00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4080
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3670
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.048 Fetching changes from the remote Git repository
00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.072 Using shallow fetch with depth 1
00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.072 > git --version # timeout=10
00:00:00.090 > git --version # 'git version 2.39.2'
00:00:00.090 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.115 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.115 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.679 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.689 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.689 > git config core.sparsecheckout # timeout=10
00:00:02.700 > git read-tree -mu HEAD # timeout=10
00:00:02.714 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.732 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.732 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.799 [Pipeline] Start of Pipeline
00:00:02.813 [Pipeline] library
00:00:02.815 Loading library shm_lib@master
00:00:02.815 Library shm_lib@master is cached. Copying from home.
00:00:02.829 [Pipeline] node
00:00:02.850 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.852 [Pipeline] {
00:00:02.862 [Pipeline] catchError
00:00:02.863 [Pipeline] {
00:00:02.879 [Pipeline] wrap
00:00:02.887 [Pipeline] {
00:00:02.895 [Pipeline] stage
00:00:02.897 [Pipeline] { (Prologue)
00:00:02.913 [Pipeline] echo
00:00:02.914 Node: VM-host-WFP7
00:00:02.918 [Pipeline] cleanWs
00:00:02.929 [WS-CLEANUP] Deleting project workspace...
00:00:02.929 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.936 [WS-CLEANUP] done
00:00:03.169 [Pipeline] setCustomBuildProperty
00:00:03.241 [Pipeline] httpRequest
00:00:03.618 [Pipeline] echo
00:00:03.620 Sorcerer 10.211.164.20 is alive
00:00:03.630 [Pipeline] retry
00:00:03.632 [Pipeline] {
00:00:03.646 [Pipeline] httpRequest
00:00:03.650 HttpMethod: GET
00:00:03.651 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.652 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.653 Response Code: HTTP/1.1 200 OK
00:00:03.653 Success: Status code 200 is in the accepted range: 200,404
00:00:03.654 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.799 [Pipeline] }
00:00:03.811 [Pipeline] // retry
00:00:03.818 [Pipeline] sh
00:00:04.106 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.123 [Pipeline] httpRequest
00:00:04.442 [Pipeline] echo
00:00:04.444 Sorcerer 10.211.164.20 is alive
00:00:04.452 [Pipeline] retry
00:00:04.454 [Pipeline] {
00:00:04.467 [Pipeline] httpRequest
00:00:04.473 HttpMethod: GET
00:00:04.473 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:04.474 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:04.475 Response Code: HTTP/1.1 200 OK
00:00:04.476 Success: Status code 200 is in the accepted range: 200,404
00:00:04.476 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:22.524 [Pipeline] }
00:00:22.542 [Pipeline] // retry
00:00:22.550 [Pipeline] sh
00:00:22.837 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:25.393 [Pipeline] sh
00:00:25.678 + git -C spdk log --oneline -n5
00:00:25.678 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:00:25.678 5592070b3 doc: update nvmf_tracing.md
00:00:25.678 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:00:25.678 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:00:25.678 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:00:25.700 [Pipeline] withCredentials
00:00:25.713 > git --version # timeout=10
00:00:25.728 > git --version # 'git version 2.39.2'
00:00:25.747 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:25.749 [Pipeline] {
00:00:25.759 [Pipeline] retry
00:00:25.761 [Pipeline] {
00:00:25.778 [Pipeline] sh
00:00:26.063 + git ls-remote http://dpdk.org/git/dpdk main
00:00:26.337 [Pipeline] }
00:00:26.356 [Pipeline] // retry
00:00:26.361 [Pipeline] }
00:00:26.378 [Pipeline] // withCredentials
00:00:26.388 [Pipeline] httpRequest
00:00:26.744 [Pipeline] echo
00:00:26.746 Sorcerer 10.211.164.20 is alive
00:00:26.756 [Pipeline] retry
00:00:26.759 [Pipeline] {
00:00:26.773 [Pipeline] httpRequest
00:00:26.778 HttpMethod: GET
00:00:26.779 URL: http://10.211.164.20/packages/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz
00:00:26.780 Sending request to url: http://10.211.164.20/packages/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz
00:00:26.799 Response Code: HTTP/1.1 200 OK
00:00:26.800 Success: Status code 200 is in the accepted range: 200,404
00:00:26.800 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz
00:01:18.249 [Pipeline] }
00:01:18.267 [Pipeline] // retry
00:01:18.275 [Pipeline] sh
00:01:18.562 + tar --no-same-owner -xf dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz
00:01:19.959 [Pipeline] sh
00:01:20.249 + git -C dpdk log --oneline -n5
00:01:20.249 f86085caab app/testpmd: avoid potential outside of array reference
00:01:20.249 4c2e746842 app/testpmd: remove redundant policy action condition
00:01:20.249 357f915ef5 test/eal: fix lcore check
00:01:20.249 b3e64fe596 test/eal: fix loop coverage for alignment macros
00:01:20.249 c6f484adf1 test/crypto: fix TLS zero length record check
00:01:20.261 [Pipeline] writeFile
00:01:20.270 [Pipeline] sh
00:01:20.548 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:20.560 [Pipeline] sh
00:01:20.843 + cat autorun-spdk.conf
00:01:20.843 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.843 SPDK_RUN_ASAN=1
00:01:20.843 SPDK_RUN_UBSAN=1
00:01:20.843 SPDK_TEST_RAID=1
00:01:20.843 SPDK_TEST_NATIVE_DPDK=main
00:01:20.843 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:20.843 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.851 RUN_NIGHTLY=1
00:01:20.853 [Pipeline] }
00:01:20.866 [Pipeline] // stage
00:01:20.881 [Pipeline] stage
00:01:20.884 [Pipeline] { (Run VM)
00:01:20.897 [Pipeline] sh
00:01:21.212 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:21.212 + echo 'Start stage prepare_nvme.sh'
00:01:21.212 Start stage prepare_nvme.sh
00:01:21.212 + [[ -n 0 ]]
00:01:21.212 + disk_prefix=ex0
00:01:21.212 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:21.212 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:21.212 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:21.212 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.212 ++ SPDK_RUN_ASAN=1
00:01:21.212 ++ SPDK_RUN_UBSAN=1
00:01:21.212 ++ SPDK_TEST_RAID=1
00:01:21.212 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:21.212 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:21.212 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.212 ++ RUN_NIGHTLY=1
00:01:21.212 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:21.212 + nvme_files=()
00:01:21.212 + declare -A nvme_files
00:01:21.212 + backend_dir=/var/lib/libvirt/images/backends
00:01:21.212 + nvme_files['nvme.img']=5G
00:01:21.212 + nvme_files['nvme-cmb.img']=5G
00:01:21.212 + nvme_files['nvme-multi0.img']=4G
00:01:21.212 + nvme_files['nvme-multi1.img']=4G
00:01:21.212 + nvme_files['nvme-multi2.img']=4G
00:01:21.212 + nvme_files['nvme-openstack.img']=8G
00:01:21.212 + nvme_files['nvme-zns.img']=5G
00:01:21.212 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:21.212 + (( SPDK_TEST_FTL == 1 ))
00:01:21.212 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:21.212 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:01:21.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.212 + for nvme in "${!nvme_files[@]}"
00:01:21.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:01:21.473 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.473 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:01:21.473 + echo 'End stage prepare_nvme.sh'
00:01:21.473 End stage prepare_nvme.sh
00:01:21.485 [Pipeline] sh
00:01:21.769 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:21.769 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:01:21.769
00:01:21.769 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:21.769 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:21.769 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:21.769 HELP=0
00:01:21.769 DRY_RUN=0
00:01:21.769 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:01:21.769 NVME_DISKS_TYPE=nvme,nvme,
00:01:21.769 NVME_AUTO_CREATE=0
00:01:21.769 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:01:21.769 NVME_CMB=,,
00:01:21.769 NVME_PMR=,,
00:01:21.769 NVME_ZNS=,,
00:01:21.769 NVME_MS=,,
00:01:21.769 NVME_FDP=,,
00:01:21.769 SPDK_VAGRANT_DISTRO=fedora39
00:01:21.769 SPDK_VAGRANT_VMCPU=10
00:01:21.769 SPDK_VAGRANT_VMRAM=12288
00:01:21.769 SPDK_VAGRANT_PROVIDER=libvirt
00:01:21.769 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:21.769 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:21.769 SPDK_OPENSTACK_NETWORK=0
00:01:21.769 VAGRANT_PACKAGE_BOX=0
00:01:21.769 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:21.769 FORCE_DISTRO=true
00:01:21.769 VAGRANT_BOX_VERSION=
00:01:21.769 EXTRA_VAGRANTFILES=
00:01:21.769 NIC_MODEL=virtio
00:01:21.769
00:01:21.769 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:21.769 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:23.675 Bringing machine 'default' up with 'libvirt' provider...
00:01:23.935 ==> default: Creating image (snapshot of base box volume).
00:01:24.195 ==> default: Creating domain with the following settings...
00:01:24.195 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732661163_3a0ed57cf0d8b551941e
00:01:24.195 ==> default: -- Domain type: kvm
00:01:24.195 ==> default: -- Cpus: 10
00:01:24.195 ==> default: -- Feature: acpi
00:01:24.195 ==> default: -- Feature: apic
00:01:24.195 ==> default: -- Feature: pae
00:01:24.195 ==> default: -- Memory: 12288M
00:01:24.195 ==> default: -- Memory Backing: hugepages:
00:01:24.195 ==> default: -- Management MAC:
00:01:24.195 ==> default: -- Loader:
00:01:24.195 ==> default: -- Nvram:
00:01:24.195 ==> default: -- Base box: spdk/fedora39
00:01:24.195 ==> default: -- Storage pool: default
00:01:24.195 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732661163_3a0ed57cf0d8b551941e.img (20G)
00:01:24.195 ==> default: -- Volume Cache: default
00:01:24.195 ==> default: -- Kernel:
00:01:24.195 ==> default: -- Initrd:
00:01:24.195 ==> default: -- Graphics Type: vnc
00:01:24.195 ==> default: -- Graphics Port: -1
00:01:24.195 ==> default: -- Graphics IP: 127.0.0.1
00:01:24.195 ==> default: -- Graphics Password: Not defined
00:01:24.195 ==> default: -- Video Type: cirrus
00:01:24.195 ==> default: -- Video VRAM: 9216
00:01:24.195 ==> default: -- Sound Type:
00:01:24.195 ==> default: -- Keymap: en-us
00:01:24.195 ==> default: -- TPM Path:
00:01:24.195 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:24.195 ==> default: -- Command line args:
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:24.195 ==> default: -> value=-drive,
00:01:24.195 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:24.195 ==> default: -> value=-drive,
00:01:24.195 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.195 ==> default: -> value=-drive,
00:01:24.195 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.195 ==> default: -> value=-drive,
00:01:24.195 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:24.195 ==> default: -> value=-device,
00:01:24.195 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.196 ==> default: Creating shared folders metadata...
00:01:24.196 ==> default: Starting domain.
00:01:26.105 ==> default: Waiting for domain to get an IP address...
00:01:44.228 ==> default: Waiting for SSH to become available...
00:01:44.228 ==> default: Configuring and enabling network interfaces...
00:01:49.515 default: SSH address: 192.168.121.63:22
00:01:49.515 default: SSH username: vagrant
00:01:49.515 default: SSH auth method: private key
00:01:52.056 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:00.192 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:06.773 ==> default: Mounting SSHFS shared folder...
00:02:08.683 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.683 ==> default: Checking Mount..
00:02:10.063 ==> default: Folder Successfully Mounted!
00:02:10.063 ==> default: Running provisioner: file...
00:02:11.464 default: ~/.gitconfig => .gitconfig
00:02:11.724
00:02:11.724 SUCCESS!
00:02:11.724
00:02:11.724 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:11.724 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:11.724 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:11.724
00:02:11.735 [Pipeline] }
00:02:11.751 [Pipeline] // stage
00:02:11.761 [Pipeline] dir
00:02:11.762 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:11.763 [Pipeline] {
00:02:11.777 [Pipeline] catchError
00:02:11.779 [Pipeline] {
00:02:11.792 [Pipeline] sh
00:02:12.075 + vagrant ssh-config --host vagrant
00:02:12.075 + sed -ne /^Host/,$p
00:02:12.075 + tee ssh_conf
00:02:14.616 Host vagrant
00:02:14.616 HostName 192.168.121.63
00:02:14.616 User vagrant
00:02:14.616 Port 22
00:02:14.616 UserKnownHostsFile /dev/null
00:02:14.616 StrictHostKeyChecking no
00:02:14.616 PasswordAuthentication no
00:02:14.616 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:14.616 IdentitiesOnly yes
00:02:14.616 LogLevel FATAL
00:02:14.616 ForwardAgent yes
00:02:14.616 ForwardX11 yes
00:02:14.616
00:02:14.632 [Pipeline] withEnv
00:02:14.634 [Pipeline] {
00:02:14.649 [Pipeline] sh
00:02:14.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:14.933 source /etc/os-release
00:02:14.933 [[ -e /image.version ]] && img=$(< /image.version)
00:02:14.933 # Minimal, systemd-like check.
00:02:14.933 if [[ -e /.dockerenv ]]; then
00:02:14.933 # Clear garbage from the node's name:
00:02:14.933 # agt-er_autotest_547-896 -> autotest_547-896
00:02:14.933 # $HOSTNAME is the actual container id
00:02:14.933 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:14.933 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:14.933 # We can assume this is a mount from a host where container is running,
00:02:14.933 # so fetch its hostname to easily identify the target swarm worker.
00:02:14.933 container="$(< /etc/hostname) ($agent)"
00:02:14.933 else
00:02:14.933 # Fallback
00:02:14.933 container=$agent
00:02:14.933 fi
00:02:14.933 fi
00:02:14.933 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:14.933
00:02:15.205 [Pipeline] }
00:02:15.221 [Pipeline] // withEnv
00:02:15.231 [Pipeline] setCustomBuildProperty
00:02:15.248 [Pipeline] stage
00:02:15.251 [Pipeline] { (Tests)
00:02:15.269 [Pipeline] sh
00:02:15.553 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:15.827 [Pipeline] sh
00:02:16.110 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:16.385 [Pipeline] timeout
00:02:16.386 Timeout set to expire in 1 hr 30 min
00:02:16.388 [Pipeline] {
00:02:16.403 [Pipeline] sh
00:02:16.687 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:17.257 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:02:17.271 [Pipeline] sh
00:02:17.556 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:17.832 [Pipeline] sh
00:02:18.116 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:18.415 [Pipeline] sh
00:02:18.697 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:18.958 ++ readlink -f spdk_repo
00:02:18.958 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:18.958 + [[ -n /home/vagrant/spdk_repo ]]
00:02:18.958 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:18.958 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:18.958 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:18.958 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:18.958 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:18.958 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:18.958 + cd /home/vagrant/spdk_repo
00:02:18.958 + source /etc/os-release
00:02:18.958 ++ NAME='Fedora Linux'
00:02:18.958 ++ VERSION='39 (Cloud Edition)'
00:02:18.958 ++ ID=fedora
00:02:18.958 ++ VERSION_ID=39
00:02:18.958 ++ VERSION_CODENAME=
00:02:18.958 ++ PLATFORM_ID=platform:f39
00:02:18.958 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:18.958 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:18.958 ++ LOGO=fedora-logo-icon
00:02:18.958 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:18.958 ++ HOME_URL=https://fedoraproject.org/
00:02:18.958 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:18.958 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:18.958 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:18.958 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:18.958 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:18.958 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:18.958 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:18.958 ++ SUPPORT_END=2024-11-12
00:02:18.958 ++ VARIANT='Cloud Edition'
00:02:18.958 ++ VARIANT_ID=cloud
00:02:18.958 + uname -a
00:02:18.958 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:18.958 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:19.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:19.529 Hugepages
00:02:19.529 node hugesize free / total
00:02:19.529 node0 1048576kB 0 / 0
00:02:19.529 node0 2048kB 0 / 0
00:02:19.529
00:02:19.529 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:19.529 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:19.529 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:19.529 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:19.529 + rm -f /tmp/spdk-ld-path
00:02:19.529 + source autorun-spdk.conf
00:02:19.529 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.529 ++ SPDK_RUN_ASAN=1
00:02:19.529 ++ SPDK_RUN_UBSAN=1
00:02:19.529 ++ SPDK_TEST_RAID=1
00:02:19.529 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:19.529 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:19.529 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.529 ++ RUN_NIGHTLY=1
00:02:19.529 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:19.529 + [[ -n '' ]]
00:02:19.529 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:19.795 + for M in /var/spdk/build-*-manifest.txt
00:02:19.795 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:19.795 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.795 + for M in /var/spdk/build-*-manifest.txt
00:02:19.795 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:19.795 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.795 + for M in /var/spdk/build-*-manifest.txt
00:02:19.795 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:19.795 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.795 ++ uname
00:02:19.795 + [[ Linux == \L\i\n\u\x ]]
00:02:19.795 + sudo dmesg -T
00:02:19.795 + sudo dmesg --clear
00:02:19.795 + dmesg_pid=6168
00:02:19.795 + [[ Fedora Linux == FreeBSD ]]
00:02:19.795 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.795 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.795 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:19.795 + sudo dmesg -Tw
00:02:19.795 + [[ -x /usr/src/fio-static/fio ]]
00:02:19.795 + export FIO_BIN=/usr/src/fio-static/fio
00:02:19.795 + FIO_BIN=/usr/src/fio-static/fio
00:02:19.795 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:19.795 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:19.795 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:19.795 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.795 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.795 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:19.795 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.795 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.795 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.094 22:46:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:20.094 22:46:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.094 22:46:58 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:20.094 22:46:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:20.094 22:46:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.094 22:46:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:20.094 22:46:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:20.094 22:46:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:20.094 22:46:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:20.094 22:46:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:20.094 22:46:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:20.094 22:46:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.094 22:46:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.094 22:46:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.094 22:46:59 -- paths/export.sh@5 -- $ export PATH
00:02:20.094 22:46:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.094 22:46:59 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:20.094 22:46:59 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:20.094 22:46:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732661219.XXXXXX
00:02:20.094 22:46:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732661219.mqj8vR
00:02:20.094 22:46:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:20.094 22:46:59 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:20.094 22:46:59 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:20.094 22:46:59 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:20.094 22:46:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:20.094 22:46:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:20.094 22:46:59 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:20.094 22:46:59 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:20.094 22:46:59 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.094 22:46:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:20.094 22:46:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:20.094 22:46:59 -- pm/common@17 -- $ local monitor
00:02:20.094 22:46:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.094 22:46:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.094 22:46:59 -- pm/common@25 -- $ sleep 1
00:02:20.094 22:46:59 -- pm/common@21 -- $ date +%s
00:02:20.094 22:46:59 -- pm/common@21 -- $ date +%s
00:02:20.094 22:46:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732661219
00:02:20.094 22:46:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732661219
00:02:20.094 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732661219_collect-cpu-load.pm.log
00:02:20.094 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732661219_collect-vmstat.pm.log
00:02:21.046 22:47:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:21.046 22:47:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:21.046 22:47:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:21.046 22:47:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:21.046 22:47:00 -- spdk/autobuild.sh@16 -- $ date -u
00:02:21.046 Tue Nov 26 10:47:00 PM UTC 2024
00:02:21.046 22:47:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:21.046 v25.01-pre-271-g2f2acf4eb
00:02:21.046 22:47:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:21.046 22:47:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:21.046 22:47:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:21.046 22:47:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.046 22:47:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.046 ************************************ 00:02:21.046 START TEST asan 00:02:21.046 ************************************ 00:02:21.046 using asan 00:02:21.046 22:47:00 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:21.046 00:02:21.046 real 0m0.001s 00:02:21.046 user 0m0.000s 00:02:21.046 sys 0m0.001s 00:02:21.046 22:47:00 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.046 22:47:00 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.046 ************************************ 00:02:21.046 END TEST asan 00:02:21.046 ************************************ 00:02:21.307 22:47:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.307 22:47:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.307 22:47:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.307 22:47:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.307 22:47:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.307 ************************************ 00:02:21.307 START TEST ubsan 00:02:21.307 ************************************ 00:02:21.307 using ubsan 00:02:21.307 22:47:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:21.307 00:02:21.307 real 0m0.000s 00:02:21.307 user 0m0.000s 00:02:21.307 sys 0m0.000s 00:02:21.307 22:47:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.307 22:47:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.307 ************************************ 00:02:21.307 END TEST ubsan 00:02:21.307 ************************************ 00:02:21.307 22:47:00 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:21.307 22:47:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.307 22:47:00 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.307 
22:47:00 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:21.307 22:47:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.307 22:47:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.307 ************************************ 00:02:21.307 START TEST build_native_dpdk 00:02:21.307 ************************************ 00:02:21.307 22:47:00 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.307 f86085caab app/testpmd: avoid potential outside of array reference 00:02:21.307 4c2e746842 app/testpmd: remove redundant policy action condition 00:02:21.307 357f915ef5 test/eal: fix lcore check 00:02:21.307 b3e64fe596 test/eal: fix loop coverage for alignment macros 00:02:21.307 c6f484adf1 test/crypto: fix TLS zero length record check 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:21.307 22:47:00 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" 
"power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:21.308 22:47:00 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:21.308 patching file config/rte_config.h 00:02:21.308 Hunk #1 succeeded at 72 (offset 13 lines). 
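The `cmp_versions` trace above splits both version strings on `IFS=.-:` and walks the components left to right, normalizing each through `decimal` (so `07` becomes `7`) before comparing. A minimal, self-contained sketch of that component-wise comparison follows; the function name `cmp_ver` and the digit-stripping shortcut for pre-release tags like `rc3` are illustrative simplifications, not SPDK's actual `scripts/common.sh` helpers.

```shell
#!/usr/bin/env bash
# Illustrative sketch (assumed names, NOT SPDK's real cmp_versions):
# split two version strings on ".-:" and compare components numerically,
# printing "lt", "eq", or "gt".
cmp_ver() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    # Walk up to the longer of the two component lists, as the trace does
    # with (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )).
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for ((i = 0; i < n; i++)); do
        # Keep digits only, then force base 10 so "07" parses as 7
        # (mirroring the decimal() normalization in the trace above).
        a=${v1[i]:-0}; a=${a//[!0-9]/}; a=$((10#${a:-0}))
        b=${v2[i]:-0}; b=${b//[!0-9]/}; b=$((10#${b:-0}))
        ((a > b)) && { echo gt; return; }
        ((a < b)) && { echo lt; return; }
    done
    echo eq
}
```

With this sketch, `cmp_ver 24.11.0-rc3 21.11.0` reports `gt` at the first component (24 > 21), and `cmp_ver 24.11.0-rc3 24.07.0` reports `gt` at the second (11 > 7) — consistent with the `lt` checks in the log returning 1 for both pairs. Note the real script treats pre-release suffixes more carefully than this digit-stripping shortcut does.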
00:02:21.308 22:47:00 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:21.308 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:21.569 22:47:00 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:21.569 22:47:00 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:21.569 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:21.570 22:47:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:21.570 patching file drivers/bus/pci/linux/pci_uio.c 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:21.570 22:47:00 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:28.151 The Meson build system 00:02:28.151 Version: 1.5.0 00:02:28.151 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:28.151 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:28.151 Build type: native build 00:02:28.151 Project name: DPDK 00:02:28.151 Project version: 24.11.0-rc3 00:02:28.151 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:28.151 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:28.151 Host machine cpu family: x86_64 00:02:28.151 Host machine cpu: x86_64 00:02:28.151 Message: ## Building in Developer Mode ## 00:02:28.151 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.151 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:28.151 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.151 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:28.151 Program cat found: YES (/usr/bin/cat) 00:02:28.151 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:28.151 Compiler for C supports arguments -march=native: YES 00:02:28.151 Checking for size of "void *" : 8 00:02:28.151 Checking for size of "void *" : 8 (cached) 00:02:28.151 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:28.151 Library m found: YES 00:02:28.151 Library numa found: YES 00:02:28.151 Has header "numaif.h" : YES 00:02:28.151 Library fdt found: NO 00:02:28.151 Library execinfo found: NO 00:02:28.151 Has header "execinfo.h" : YES 00:02:28.151 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:28.151 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:28.151 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.151 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.151 Run-time dependency openssl found: YES 3.1.1 00:02:28.151 Run-time dependency libpcap found: YES 1.10.4 00:02:28.151 Has header "pcap.h" with dependency libpcap: YES 00:02:28.151 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.151 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.151 Compiler for C supports arguments -Wformat: YES 00:02:28.151 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.151 Compiler for C supports arguments -Wformat-security: NO 00:02:28.151 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.151 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.151 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.151 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.151 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.151 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.151 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.151 Compiler for C supports arguments -Wundef: YES 00:02:28.151 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.151 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.151 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:28.151 Program objdump found: YES (/usr/bin/objdump) 00:02:28.151 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:28.151 Checking if "AVX512 checking" compiles: YES 00:02:28.151 Fetching value of define "__AVX512F__" : 1 00:02:28.151 Fetching value of define "__AVX512BW__" : 1 00:02:28.151 Fetching value of define "__AVX512DQ__" : 1 00:02:28.151 Fetching value of define "__AVX512VL__" : 1 00:02:28.151 Fetching value of define "__SSE4_2__" : 1 00:02:28.151 Fetching value of define "__AES__" : 1 00:02:28.151 Fetching value of define "__AVX__" : 1 00:02:28.151 Fetching value of define "__AVX2__" : 1 00:02:28.151 Fetching value of define "__AVX512BW__" : 1 00:02:28.151 Fetching value of define "__AVX512CD__" : 1 00:02:28.151 Fetching value of define "__AVX512DQ__" : 1 00:02:28.151 Fetching value of define "__AVX512F__" : 1 00:02:28.151 Fetching value of define "__AVX512VL__" : 1 00:02:28.151 Fetching value of define "__PCLMUL__" : 1 00:02:28.151 Fetching value of define "__RDRND__" : 1 00:02:28.151 Fetching value of define "__RDSEED__" : 1 00:02:28.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:28.151 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.151 Message: lib/log: Defining dependency "log" 00:02:28.151 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.151 Message: lib/argparse: Defining dependency "argparse" 00:02:28.151 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.151 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:28.151 Checking for function "getentropy" : NO 00:02:28.151 Message: lib/eal: Defining dependency "eal" 00:02:28.151 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:28.151 Message: lib/ring: Defining dependency "ring" 00:02:28.151 Message: lib/rcu: Defining dependency "rcu" 00:02:28.151 Message: lib/mempool: Defining dependency "mempool" 
00:02:28.151 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.151 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:28.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:28.151 Compiler for C supports arguments -mpclmul: YES 00:02:28.151 Compiler for C supports arguments -maes: YES 00:02:28.151 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.151 Message: lib/net: Defining dependency "net" 00:02:28.151 Message: lib/meter: Defining dependency "meter" 00:02:28.151 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.151 Message: lib/pci: Defining dependency "pci" 00:02:28.151 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.151 Message: lib/metrics: Defining dependency "metrics" 00:02:28.151 Message: lib/hash: Defining dependency "hash" 00:02:28.151 Message: lib/timer: Defining dependency "timer" 00:02:28.151 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:28.151 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:28.151 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:28.151 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:28.151 Message: lib/acl: Defining dependency "acl" 00:02:28.151 Message: lib/bbdev: Defining dependency "bbdev" 00:02:28.151 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:28.151 Run-time dependency libelf found: YES 0.191 00:02:28.151 Message: lib/bpf: Defining dependency "bpf" 00:02:28.151 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:28.151 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.151 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.151 Message: lib/distributor: Defining dependency "distributor" 00:02:28.151 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.151 Message: lib/efd: Defining dependency "efd" 00:02:28.151 Message: lib/eventdev: Defining dependency "eventdev" 00:02:28.151 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:28.151 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:28.151 Message: lib/gro: Defining dependency "gro" 00:02:28.151 Message: lib/gso: Defining dependency "gso" 00:02:28.151 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:28.151 Message: lib/jobstats: Defining dependency "jobstats" 00:02:28.151 Message: lib/latencystats: Defining dependency "latencystats" 00:02:28.151 Message: lib/lpm: Defining dependency "lpm" 00:02:28.151 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:28.151 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:28.151 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:28.151 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:28.151 Message: lib/member: Defining dependency "member" 00:02:28.151 Message: lib/pcapng: Defining dependency "pcapng" 00:02:28.151 Message: lib/power: Defining dependency "power" 00:02:28.151 Message: lib/rawdev: Defining dependency "rawdev" 00:02:28.151 Message: lib/regexdev: Defining dependency "regexdev" 00:02:28.151 Message: lib/mldev: Defining dependency "mldev" 00:02:28.151 Message: lib/rib: Defining dependency "rib" 00:02:28.151 Message: lib/reorder: Defining dependency "reorder" 00:02:28.151 Message: lib/sched: Defining dependency "sched" 00:02:28.151 Message: lib/security: Defining dependency "security" 00:02:28.151 Message: lib/stack: Defining dependency "stack" 00:02:28.151 Has header "linux/userfaultfd.h" : YES 00:02:28.151 Has header "linux/vduse.h" : YES 00:02:28.151 Message: lib/vhost: Defining dependency "vhost" 00:02:28.151 Message: lib/ipsec: Defining dependency "ipsec" 00:02:28.151 Message: lib/pdcp: Defining dependency "pdcp" 00:02:28.151 Message: lib/fib: Defining dependency "fib" 00:02:28.151 Message: lib/port: Defining dependency "port" 00:02:28.151 Message: lib/pdump: Defining dependency "pdump" 00:02:28.151 Message: lib/table: Defining dependency "table" 00:02:28.151 Message: lib/pipeline: Defining dependency "pipeline" 00:02:28.151 
Message: lib/graph: Defining dependency "graph" 00:02:28.151 Message: lib/node: Defining dependency "node" 00:02:28.152 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.152 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.152 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.152 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.152 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.152 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:28.152 Compiler for C supports arguments -Wno-unused-value: YES 00:02:28.152 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:28.152 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:28.152 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:28.152 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:28.152 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:28.152 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:28.152 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:28.152 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:28.152 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:28.152 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:28.152 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:28.152 Has header "sys/epoll.h" : YES 00:02:28.152 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:28.152 Configuring doxy-api-html.conf using configuration 00:02:28.152 Configuring doxy-api-man.conf using configuration 00:02:28.152 Program mandb found: YES (/usr/bin/mandb) 00:02:28.152 Program sphinx-build found: NO 00:02:28.152 Program sphinx-build found: NO 00:02:28.152 Configuring rte_build_config.h using configuration 00:02:28.152 Message: 00:02:28.152 ================= 
00:02:28.152 Applications Enabled
00:02:28.152 =================
00:02:28.152
00:02:28.152 apps:
00:02:28.152 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:28.152 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:28.152 test-pmd, test-regex, test-sad, test-security-perf,
00:02:28.152
00:02:28.152 Message:
00:02:28.152 =================
00:02:28.152 Libraries Enabled
00:02:28.152 =================
00:02:28.152
00:02:28.152 libs:
00:02:28.152 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:02:28.152 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:02:28.152 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:02:28.152 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:02:28.152 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:02:28.152 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:02:28.152 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:02:28.152 graph, node,
00:02:28.152
00:02:28.152 Message:
00:02:28.152 ===============
00:02:28.152 Drivers Enabled
00:02:28.152 ===============
00:02:28.152
00:02:28.152 common:
00:02:28.152
00:02:28.152 bus:
00:02:28.152 pci, vdev,
00:02:28.152 mempool:
00:02:28.152 ring,
00:02:28.152 dma:
00:02:28.152
00:02:28.152 net:
00:02:28.152 i40e,
00:02:28.152 raw:
00:02:28.152
00:02:28.152 crypto:
00:02:28.152
00:02:28.152 compress:
00:02:28.152
00:02:28.152 regex:
00:02:28.152
00:02:28.152 ml:
00:02:28.152
00:02:28.152 vdpa:
00:02:28.152
00:02:28.152 event:
00:02:28.152
00:02:28.152 baseband:
00:02:28.152
00:02:28.152 gpu:
00:02:28.152
00:02:28.152 power:
00:02:28.152 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm,
00:02:28.152
00:02:28.152 Message:
00:02:28.152 =================
00:02:28.152 Content Skipped
00:02:28.152 =================
00:02:28.152
00:02:28.152 apps:
00:02:28.152
00:02:28.152 libs:
00:02:28.152
00:02:28.152 drivers:
00:02:28.152 common/cpt: not in enabled drivers build config
00:02:28.152 common/dpaax: not in enabled drivers build config
00:02:28.152 common/iavf: not in enabled drivers build config
00:02:28.152 common/idpf: not in enabled drivers build config
00:02:28.152 common/ionic: not in enabled drivers build config
00:02:28.152 common/mvep: not in enabled drivers build config
00:02:28.152 common/octeontx: not in enabled drivers build config
00:02:28.152 bus/auxiliary: not in enabled drivers build config
00:02:28.152 bus/cdx: not in enabled drivers build config
00:02:28.152 bus/dpaa: not in enabled drivers build config
00:02:28.152 bus/fslmc: not in enabled drivers build config
00:02:28.152 bus/ifpga: not in enabled drivers build config
00:02:28.152 bus/platform: not in enabled drivers build config
00:02:28.152 bus/uacce: not in enabled drivers build config
00:02:28.152 bus/vmbus: not in enabled drivers build config
00:02:28.152 common/cnxk: not in enabled drivers build config
00:02:28.152 common/mlx5: not in enabled drivers build config
00:02:28.152 common/nfp: not in enabled drivers build config
00:02:28.152 common/nitrox: not in enabled drivers build config
00:02:28.152 common/qat: not in enabled drivers build config
00:02:28.152 common/sfc_efx: not in enabled drivers build config
00:02:28.152 mempool/bucket: not in enabled drivers build config
00:02:28.152 mempool/cnxk: not in enabled drivers build config
00:02:28.152 mempool/dpaa: not in enabled drivers build config
00:02:28.152 mempool/dpaa2: not in enabled drivers build config
00:02:28.152 mempool/octeontx: not in enabled drivers build config
00:02:28.152 mempool/stack: not in enabled drivers build config
00:02:28.152 dma/cnxk: not in enabled drivers build config
00:02:28.152 dma/dpaa: not in enabled drivers build config
00:02:28.152 dma/dpaa2: not in enabled drivers build config
00:02:28.152 dma/hisilicon: not in enabled drivers build config
00:02:28.152 dma/idxd: not in enabled drivers build config
00:02:28.152 dma/ioat: not in enabled drivers build config
00:02:28.152 dma/odm: not in enabled drivers build config
00:02:28.152 dma/skeleton: not in enabled drivers build config
00:02:28.152 net/af_packet: not in enabled drivers build config
00:02:28.152 net/af_xdp: not in enabled drivers build config
00:02:28.152 net/ark: not in enabled drivers build config
00:02:28.152 net/atlantic: not in enabled drivers build config
00:02:28.152 net/avp: not in enabled drivers build config
00:02:28.152 net/axgbe: not in enabled drivers build config
00:02:28.152 net/bnx2x: not in enabled drivers build config
00:02:28.152 net/bnxt: not in enabled drivers build config
00:02:28.152 net/bonding: not in enabled drivers build config
00:02:28.152 net/cnxk: not in enabled drivers build config
00:02:28.152 net/cpfl: not in enabled drivers build config
00:02:28.152 net/cxgbe: not in enabled drivers build config
00:02:28.152 net/dpaa: not in enabled drivers build config
00:02:28.152 net/dpaa2: not in enabled drivers build config
00:02:28.152 net/e1000: not in enabled drivers build config
00:02:28.152 net/ena: not in enabled drivers build config
00:02:28.152 net/enetc: not in enabled drivers build config
00:02:28.152 net/enetfec: not in enabled drivers build config
00:02:28.152 net/enic: not in enabled drivers build config
00:02:28.152 net/failsafe: not in enabled drivers build config
00:02:28.152 net/fm10k: not in enabled drivers build config
00:02:28.152 net/gve: not in enabled drivers build config
00:02:28.152 net/hinic: not in enabled drivers build config
00:02:28.152 net/hns3: not in enabled drivers build config
00:02:28.152 net/iavf: not in enabled drivers build config
00:02:28.152 net/ice: not in enabled drivers build config
00:02:28.152 net/idpf: not in enabled drivers build config
00:02:28.152 net/igc: not in enabled drivers build config
00:02:28.152 net/ionic: not in enabled drivers build config
00:02:28.152 net/ipn3ke: not in enabled drivers build config
00:02:28.152 net/ixgbe: not in enabled drivers build config
00:02:28.152 net/mana: not in enabled drivers build config
00:02:28.152 net/memif: not in enabled drivers build config
00:02:28.152 net/mlx4: not in enabled drivers build config
00:02:28.152 net/mlx5: not in enabled drivers build config
00:02:28.152 net/mvneta: not in enabled drivers build config
00:02:28.152 net/mvpp2: not in enabled drivers build config
00:02:28.152 net/netvsc: not in enabled drivers build config
00:02:28.152 net/nfb: not in enabled drivers build config
00:02:28.152 net/nfp: not in enabled drivers build config
00:02:28.152 net/ngbe: not in enabled drivers build config
00:02:28.152 net/ntnic: not in enabled drivers build config
00:02:28.152 net/null: not in enabled drivers build config
00:02:28.152 net/octeontx: not in enabled drivers build config
00:02:28.152 net/octeon_ep: not in enabled drivers build config
00:02:28.152 net/pcap: not in enabled drivers build config
00:02:28.152 net/pfe: not in enabled drivers build config
00:02:28.152 net/qede: not in enabled drivers build config
00:02:28.152 net/r8169: not in enabled drivers build config
00:02:28.152 net/ring: not in enabled drivers build config
00:02:28.152 net/sfc: not in enabled drivers build config
00:02:28.152 net/softnic: not in enabled drivers build config
00:02:28.152 net/tap: not in enabled drivers build config
00:02:28.152 net/thunderx: not in enabled drivers build config
00:02:28.152 net/txgbe: not in enabled drivers build config
00:02:28.152 net/vdev_netvsc: not in enabled drivers build config
00:02:28.152 net/vhost: not in enabled drivers build config
00:02:28.152 net/virtio: not in enabled drivers build config
00:02:28.152 net/vmxnet3: not in enabled drivers build config
00:02:28.152 net/zxdh: not in enabled drivers build config
00:02:28.152 raw/cnxk_bphy: not in enabled drivers build config
00:02:28.152 raw/cnxk_gpio: not in enabled drivers build config
00:02:28.152 raw/cnxk_rvu_lf: not in enabled drivers build config
00:02:28.152 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:28.152 raw/gdtc: not in enabled drivers build config
00:02:28.152 raw/ifpga: not in enabled drivers build config
00:02:28.152 raw/ntb: not in enabled drivers build config
00:02:28.152 raw/skeleton: not in enabled drivers build config
00:02:28.152 crypto/armv8: not in enabled drivers build config
00:02:28.152 crypto/bcmfs: not in enabled drivers build config
00:02:28.152 crypto/caam_jr: not in enabled drivers build config
00:02:28.152 crypto/ccp: not in enabled drivers build config
00:02:28.152 crypto/cnxk: not in enabled drivers build config
00:02:28.152 crypto/dpaa_sec: not in enabled drivers build config
00:02:28.152 crypto/dpaa2_sec: not in enabled drivers build config
00:02:28.152 crypto/ionic: not in enabled drivers build config
00:02:28.152 crypto/ipsec_mb: not in enabled drivers build config
00:02:28.153 crypto/mlx5: not in enabled drivers build config
00:02:28.153 crypto/mvsam: not in enabled drivers build config
00:02:28.153 crypto/nitrox: not in enabled drivers build config
00:02:28.153 crypto/null: not in enabled drivers build config
00:02:28.153 crypto/octeontx: not in enabled drivers build config
00:02:28.153 crypto/openssl: not in enabled drivers build config
00:02:28.153 crypto/scheduler: not in enabled drivers build config
00:02:28.153 crypto/uadk: not in enabled drivers build config
00:02:28.153 crypto/virtio: not in enabled drivers build config
00:02:28.153 compress/isal: not in enabled drivers build config
00:02:28.153 compress/mlx5: not in enabled drivers build config
00:02:28.153 compress/nitrox: not in enabled drivers build config
00:02:28.153 compress/octeontx: not in enabled drivers build config
00:02:28.153 compress/uadk: not in enabled drivers build config
00:02:28.153 compress/zlib: not in enabled drivers build config
00:02:28.153 regex/mlx5: not in enabled drivers build config
00:02:28.153 regex/cn9k: not in enabled drivers build config
00:02:28.153 ml/cnxk: not in enabled drivers build config
00:02:28.153 vdpa/ifc: not in enabled drivers build config
00:02:28.153 vdpa/mlx5: not in enabled drivers build config
00:02:28.153 vdpa/nfp: not in enabled drivers build config
00:02:28.153 vdpa/sfc: not in enabled drivers build config
00:02:28.153 event/cnxk: not in enabled drivers build config
00:02:28.153 event/dlb2: not in enabled drivers build config
00:02:28.153 event/dpaa: not in enabled drivers build config
00:02:28.153 event/dpaa2: not in enabled drivers build config
00:02:28.153 event/dsw: not in enabled drivers build config
00:02:28.153 event/opdl: not in enabled drivers build config
00:02:28.153 event/skeleton: not in enabled drivers build config
00:02:28.153 event/sw: not in enabled drivers build config
00:02:28.153 event/octeontx: not in enabled drivers build config
00:02:28.153 baseband/acc: not in enabled drivers build config
00:02:28.153 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:28.153 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:28.153 baseband/la12xx: not in enabled drivers build config
00:02:28.153 baseband/null: not in enabled drivers build config
00:02:28.153 baseband/turbo_sw: not in enabled drivers build config
00:02:28.153 gpu/cuda: not in enabled drivers build config
00:02:28.153 power/amd_uncore: not in enabled drivers build config
00:02:28.153
00:02:28.153
00:02:28.153 Message: DPDK build config complete:
00:02:28.153 source path = "/home/vagrant/spdk_repo/dpdk"
00:02:28.153 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp"
00:02:28.153 Build targets in project: 246
00:02:28.153
00:02:28.153 DPDK 24.11.0-rc3
00:02:28.153
00:02:28.153 User defined options
00:02:28.153 libdir : lib
00:02:28.153 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:28.153 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:28.153 c_link_args :
00:02:28.153 enable_docs : false
00:02:28.153 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:28.153 enable_kmods : false
00:02:29.092 machine : native
00:02:29.092 tests : false
00:02:29.092
00:02:29.092 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:29.092 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:29.092 22:47:07 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:29.092 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:29.092 [1/766] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o
00:02:29.092 [2/766] Compiling C object lib/librte_log.a.p/log_log_journal.c.o
00:02:29.092 [3/766] Compiling C object lib/librte_log.a.p/log_log_color.c.o
00:02:29.092 [4/766] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o
00:02:29.092 [5/766] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:29.092 [6/766] Linking static target lib/librte_kvargs.a
00:02:29.092 [7/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:29.352 [8/766] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:29.352 [9/766] Linking static target lib/librte_log.a
00:02:29.352 [10/766] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:29.352 [11/766] Linking static target lib/librte_argparse.a
00:02:29.352 [12/766] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.352 [13/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:29.352 [14/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:29.352 [15/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:29.352 [16/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:29.352 [17/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:29.613 [18/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:29.613 [19/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:29.613 [20/766] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.613 [21/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:29.613 [22/766] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.613 [23/766] Linking target lib/librte_log.so.25.0
00:02:29.873 [24/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:29.873 [25/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:29.873 [26/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o
00:02:29.873 [27/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:29.873 [28/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:29.873 [29/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:29.873 [30/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:29.873 [31/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:29.873 [32/766] Linking static target lib/librte_telemetry.a
00:02:30.133 [33/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:30.133 [34/766] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols
00:02:30.133 [35/766] Linking target lib/librte_kvargs.so.25.0
00:02:30.133 [36/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:30.133 [37/766] Linking target lib/librte_argparse.so.25.0
00:02:30.133 [38/766] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols
00:02:30.133 [39/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:30.133 [40/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:30.133 [41/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:30.392 [42/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:30.392 [43/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:30.392 [44/766] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.392 [45/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:30.392 [46/766] Linking target lib/librte_telemetry.so.25.0
00:02:30.392 [47/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:30.392 [48/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:30.392 [49/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:30.392 [50/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o
00:02:30.392 [51/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:30.392 [52/766] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:02:30.392 [53/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:30.651 [54/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:30.651 [55/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:30.651 [56/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:30.910 [57/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:30.910 [58/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:30.910 [59/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:30.910 [60/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:30.910 [61/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:30.910 [62/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:30.910 [63/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:30.910 [64/766] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:31.170 [65/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:31.170 [66/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:31.170 [67/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:31.170 [68/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:31.170 [69/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:31.170 [70/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:31.170 [71/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:31.170 [72/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:31.170 [73/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:31.170 [74/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:31.430 [75/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:31.430 [76/766] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:31.430 [77/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:31.430 [78/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:31.690 [79/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:31.690 [80/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:31.690 [81/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:31.690 [82/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:31.690 [83/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:31.690 [84/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:31.690 [85/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:31.690 [86/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:31.949 [87/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:31.949 [88/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:31.949 [89/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:31.949 [90/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o
00:02:31.949 [91/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:31.949 [92/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:31.949 [93/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:31.949 [94/766] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:31.949 [95/766] Linking static target lib/librte_ring.a
00:02:32.207 [96/766] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.207 [97/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:32.207 [98/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:32.207 [99/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:32.207 [100/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:32.207 [101/766] Linking static target lib/librte_eal.a
00:02:32.207 [102/766] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:32.473 [103/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:32.473 [104/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:32.473 [105/766] Linking static target lib/librte_mempool.a
00:02:32.473 [106/766] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:32.473 [107/766] Linking static target lib/librte_rcu.a
00:02:32.743 [108/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:32.743 [109/766] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:32.743 [110/766] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:32.743 [111/766] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:32.743 [112/766] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:32.743 [113/766] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:32.743 [114/766] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:32.743 [115/766] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.002 [116/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:33.002 [117/766] Linking static target lib/librte_mbuf.a
00:02:33.002 [118/766] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:33.002 [119/766] Linking static target lib/librte_net.a
00:02:33.002 [120/766] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:33.002 [121/766] Linking static target lib/librte_meter.a
00:02:33.002 [122/766] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.002 [123/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:33.261 [124/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:33.261 [125/766] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.261 [126/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:33.261 [127/766] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.261 [128/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:33.520 [129/766] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.520 [130/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:33.520 [131/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:34.088 [132/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:34.088 [133/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:34.088 [134/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:34.088 [135/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:34.088 [136/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:34.088 [137/766] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:34.088 [138/766] Linking static target lib/librte_pci.a
00:02:34.088 [139/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:34.088 [140/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:34.088 [141/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:34.088 [142/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:34.347 [143/766] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.347 [144/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:34.347 [145/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:34.347 [146/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:34.347 [147/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:34.347 [148/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:34.347 [149/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:34.347 [150/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:34.347 [151/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:34.347 [152/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:34.606 [153/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:34.606 [154/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:34.606 [155/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:34.606 [156/766] Linking static target lib/librte_cmdline.a
00:02:34.865 [157/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:34.865 [158/766] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:34.865 [159/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:34.865 [160/766] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:34.865 [161/766] Linking static target lib/librte_metrics.a
00:02:34.865 [162/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:35.124 [163/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:35.124 [164/766] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.124 [165/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o
00:02:35.382 [166/766] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.382 [167/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:35.382 [168/766] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:35.382 [169/766] Linking static target lib/librte_timer.a
00:02:35.641 [170/766] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:35.641 [171/766] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:35.641 [172/766] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.641 [173/766] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:35.900 [174/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:36.159 [175/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:36.159 [176/766] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:36.159 [177/766] Linking static target lib/librte_bitratestats.a
00:02:36.418 [178/766] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:36.418 [179/766] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:36.418 [180/766] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.418 [181/766] Linking static target lib/librte_bbdev.a
00:02:36.418 [182/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:36.677 [183/766] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:36.677 [184/766] Linking static target lib/librte_hash.a
00:02:36.678 [185/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:36.678 [186/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:36.937 [187/766] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.937 [188/766] Linking static target lib/librte_ethdev.a
00:02:36.937 [189/766] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:36.937 [190/766] Linking static target lib/acl/libavx2_tmp.a
00:02:36.937 [191/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:36.937 [192/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:37.197 [193/766] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.197 [194/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:37.197 [195/766] Linking target lib/librte_eal.so.25.0
00:02:37.197 [196/766] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.197 [197/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:37.197 [198/766] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols
00:02:37.457 [199/766] Linking target lib/librte_ring.so.25.0
00:02:37.457 [200/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:37.457 [201/766] Linking target lib/librte_meter.so.25.0
00:02:37.457 [202/766] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:37.457 [203/766] Linking target lib/librte_pci.so.25.0
00:02:37.457 [204/766] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols
00:02:37.457 [205/766] Linking target lib/librte_rcu.so.25.0
00:02:37.457 [206/766] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols
00:02:37.457 [207/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:37.457 [208/766] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols
00:02:37.457 [209/766] Linking static target lib/librte_cfgfile.a
00:02:37.457 [210/766] Linking target lib/librte_timer.so.25.0
00:02:37.457 [211/766] Linking target lib/librte_mempool.so.25.0
00:02:37.457 [212/766] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols
00:02:37.717 [213/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:37.717 [214/766] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols
00:02:37.717 [215/766] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols
00:02:37.717 [216/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:37.717 [217/766] Linking target lib/librte_mbuf.so.25.0
00:02:37.718 [218/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:37.718 [219/766] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols
00:02:37.718 [220/766] Linking target lib/librte_net.so.25.0
00:02:37.718 [221/766] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.977 [222/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:37.977 [223/766] Linking static target lib/librte_bpf.a
00:02:37.977 [224/766] Linking target lib/librte_bbdev.so.25.0
00:02:37.977 [225/766] Linking target lib/librte_cfgfile.so.25.0
00:02:37.977 [226/766] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols
00:02:37.977 [227/766] Linking target lib/librte_cmdline.so.25.0
00:02:37.977 [228/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:37.977 [229/766] Linking target lib/librte_hash.so.25.0
00:02:37.977 [230/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:37.977 [231/766] Linking static target lib/librte_compressdev.a
00:02:38.237 [232/766] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols
00:02:38.237 [233/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:38.237 [234/766] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.237 [235/766] Linking static target lib/librte_acl.a
00:02:38.237 [236/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:38.237 [237/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:38.237 [238/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:38.237 [239/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:38.497 [240/766] Linking static target lib/librte_distributor.a
00:02:38.498 [241/766] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.498 [242/766] Linking target lib/librte_acl.so.25.0
00:02:38.498 [243/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:38.498 [244/766] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.498 [245/766] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols
00:02:38.498 [246/766] Linking target lib/librte_compressdev.so.25.0
00:02:38.498 [247/766] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.757 [248/766] Linking target lib/librte_distributor.so.25.0
00:02:38.757 [249/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:38.757 [250/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:38.757 [251/766] Linking static target lib/librte_dmadev.a
00:02:39.017 [252/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:39.017 [253/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:39.277 [254/766] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.277 [255/766] Linking target lib/librte_dmadev.so.25.0
00:02:39.277 [256/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:39.277 [257/766] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols
00:02:39.277 [258/766] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:39.277 [259/766] Linking static target lib/librte_efd.a
00:02:39.537 [260/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:39.537 [261/766] Linking static target lib/librte_cryptodev.a
00:02:39.537 [262/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:39.537 [263/766] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.537 [264/766] Linking target lib/librte_efd.so.25.0
00:02:39.797 [265/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:39.797 [266/766] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:39.797 [267/766] Linking static target lib/librte_dispatcher.a
00:02:39.797 [268/766] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:39.797 [269/766] Linking static target lib/librte_gpudev.a
00:02:40.056 [270/766] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:40.056 [271/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:40.056 [272/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:40.315 [273/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:40.315 [274/766] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.315 [275/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:40.575 [276/766] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.575 [277/766] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:40.575 [278/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:40.575 [279/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:40.575 [280/766] Linking static target lib/librte_gro.a
00:02:40.575 [281/766] Linking target lib/librte_cryptodev.so.25.0
00:02:40.575 [282/766] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.575 [283/766] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:40.575 [284/766] Linking target lib/librte_gpudev.so.25.0
00:02:40.834 [285/766] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols
00:02:40.834 [286/766] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:40.834 [287/766] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.834 [288/766] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:40.834 [289/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:40.834 [290/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:40.834 [291/766] Linking static target lib/librte_eventdev.a
00:02:41.094 [292/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:41.094 [293/766] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:41.094 [294/766] Linking static target lib/librte_gso.a
00:02:41.094 [295/766] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.094 [296/766] Linking target lib/librte_ethdev.so.25.0
00:02:41.094 [297/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:41.094 [298/766] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.094 [299/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:41.356 [300/766] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols
00:02:41.356 [301/766] Linking target lib/librte_metrics.so.25.0
00:02:41.356 [302/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:41.356 [303/766] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:41.356 [304/766] Linking target lib/librte_bpf.so.25.0
00:02:41.356 [305/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:41.356 [306/766] Linking target lib/librte_gro.so.25.0
00:02:41.356 [307/766] Linking static target lib/librte_jobstats.a
00:02:41.356 [308/766] Linking target lib/librte_gso.so.25.0
00:02:41.356 [309/766] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols
00:02:41.356 [310/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:41.356 [311/766] Linking target lib/librte_bitratestats.so.25.0
00:02:41.356 [312/766] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:02:41.356 [313/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:41.356 [314/766] Linking static target lib/librte_ip_frag.a
00:02:41.615 [315/766] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:41.615 [316/766] Linking static target lib/librte_latencystats.a
00:02:41.615 [317/766] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.615 [318/766] Linking target lib/librte_jobstats.so.25.0
00:02:41.615 [319/766] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.615 [320/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:41.876 [321/766] Linking target lib/librte_ip_frag.so.25.0
00:02:41.876 [322/766] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.876 [323/766] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:41.876 [324/766] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:41.876 [325/766] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:41.876 [326/766] Linking target lib/librte_latencystats.so.25.0
00:02:41.876 [327/766] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:02:41.876 [328/766] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:41.876 [329/766] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o
00:02:41.876 [330/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:41.876 [331/766] Linking static target lib/librte_lpm.a
00:02:42.136 [332/766] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o
00:02:42.136 [333/766] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:42.136 [334/766] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.396 [335/766] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:42.396 [336/766] Linking target lib/librte_lpm.so.25.0
00:02:42.396 [337/766] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.396 [338/766] Linking static target lib/librte_power.a 00:02:42.396 [339/766] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:42.396 [340/766] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:42.396 [341/766] Linking static target lib/librte_pcapng.a 00:02:42.396 [342/766] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:42.396 [343/766] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:42.396 [344/766] Linking static target lib/librte_rawdev.a 00:02:42.656 [345/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:42.656 [346/766] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.656 [347/766] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:42.656 [348/766] Linking static target lib/librte_regexdev.a 00:02:42.656 [349/766] Linking target lib/librte_pcapng.so.25.0 00:02:42.656 [350/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:42.656 [351/766] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.656 [352/766] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:42.916 [353/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:42.916 [354/766] Linking target lib/librte_eventdev.so.25.0 00:02:42.916 [355/766] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.916 [356/766] Linking target lib/librte_rawdev.so.25.0 00:02:42.916 [357/766] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:42.916 [358/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:42.916 [359/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 
00:02:42.916 [360/766] Linking static target lib/librte_mldev.a 00:02:42.916 [361/766] Linking target lib/librte_dispatcher.so.25.0 00:02:42.916 [362/766] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:42.916 [363/766] Linking static target lib/librte_member.a 00:02:43.176 [364/766] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.176 [365/766] Linking target lib/librte_power.so.25.0 00:02:43.176 [366/766] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:43.176 [367/766] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:02:43.176 [368/766] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.176 [369/766] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:43.176 [370/766] Linking target lib/librte_regexdev.so.25.0 00:02:43.176 [371/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:43.434 [372/766] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.434 [373/766] Linking target lib/librte_member.so.25.0 00:02:43.434 [374/766] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.434 [375/766] Linking static target lib/librte_reorder.a 00:02:43.434 [376/766] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:43.434 [377/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:43.434 [378/766] Linking static target lib/librte_rib.a 00:02:43.694 [379/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:43.694 [380/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:43.694 [381/766] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.694 [382/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.694 [383/766] Linking static target lib/librte_stack.a 00:02:43.694 [384/766] 
Linking target lib/librte_reorder.so.25.0 00:02:43.694 [385/766] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.694 [386/766] Linking static target lib/librte_security.a 00:02:43.694 [387/766] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.694 [388/766] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.694 [389/766] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:43.694 [390/766] Linking target lib/librte_rib.so.25.0 00:02:43.953 [391/766] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.953 [392/766] Linking target lib/librte_stack.so.25.0 00:02:43.953 [393/766] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.953 [394/766] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:43.953 [395/766] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.213 [396/766] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.213 [397/766] Linking target lib/librte_security.so.25.0 00:02:44.213 [398/766] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.213 [399/766] Linking target lib/librte_mldev.so.25.0 00:02:44.213 [400/766] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:44.213 [401/766] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:44.213 [402/766] Linking static target lib/librte_sched.a 00:02:44.213 [403/766] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.473 [404/766] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.473 [405/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.473 [406/766] Linking target lib/librte_sched.so.25.0 00:02:44.733 [407/766] Generating symbol file 
lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:44.733 [408/766] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.733 [409/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.993 [410/766] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:44.993 [411/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:44.993 [412/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.252 [413/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:45.252 [414/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:45.252 [415/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:45.512 [416/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:45.512 [417/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:45.512 [418/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:45.771 [419/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:45.771 [420/766] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:45.771 [421/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:45.771 [422/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:46.030 [423/766] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:46.030 [424/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:46.030 [425/766] Linking static target lib/librte_ipsec.a 00:02:46.290 [426/766] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.290 [427/766] Linking target lib/librte_ipsec.so.25.0 00:02:46.290 [428/766] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:46.290 [429/766] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:46.290 [430/766] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:46.548 [431/766] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:46.548 [432/766] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:46.548 [433/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:46.548 [434/766] Linking static target lib/librte_pdcp.a 00:02:46.808 [435/766] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:46.808 [436/766] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:46.808 [437/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:46.808 [438/766] Linking static target lib/librte_fib.a 00:02:47.067 [439/766] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.067 [440/766] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:47.067 [441/766] Linking target lib/librte_pdcp.so.25.0 00:02:47.067 [442/766] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:47.327 [443/766] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.327 [444/766] Linking target lib/librte_fib.so.25.0 00:02:47.327 [445/766] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:47.327 [446/766] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:47.586 [447/766] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:47.586 [448/766] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:47.586 [449/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:47.586 [450/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:47.846 [451/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:48.105 [452/766] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:48.105 [453/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:48.105 [454/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 
00:02:48.105 [455/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:48.105 [456/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:48.105 [457/766] Linking static target lib/librte_port.a 00:02:48.105 [458/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:48.105 [459/766] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:48.105 [460/766] Linking static target lib/librte_pdump.a 00:02:48.364 [461/766] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:48.364 [462/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:48.364 [463/766] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.364 [464/766] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.364 [465/766] Linking target lib/librte_pdump.so.25.0 00:02:48.623 [466/766] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:48.623 [467/766] Linking target lib/librte_port.so.25.0 00:02:48.623 [468/766] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:48.883 [469/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:48.883 [470/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:48.883 [471/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:48.883 [472/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:48.883 [473/766] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:48.883 [474/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:48.883 [475/766] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:49.452 [476/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:49.452 [477/766] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:49.452 [478/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:49.452 [479/766] Linking static target lib/librte_table.a 00:02:49.712 [480/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:49.712 [481/766] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:49.971 [482/766] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.971 [483/766] Linking target lib/librte_table.so.25.0 00:02:49.972 [484/766] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:50.230 [485/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:50.230 [486/766] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:50.230 [487/766] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:50.489 [488/766] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:50.489 [489/766] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:50.748 [490/766] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:50.748 [491/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:50.748 [492/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.748 [493/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:50.748 [494/766] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:50.748 [495/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:51.008 [496/766] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:51.268 [497/766] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:51.268 [498/766] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:51.268 [499/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:51.268 [500/766] 
Linking static target lib/librte_graph.a 00:02:51.268 [501/766] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:51.268 [502/766] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:51.838 [503/766] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.839 [504/766] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:51.839 [505/766] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:51.839 [506/766] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:51.839 [507/766] Linking target lib/librte_graph.so.25.0 00:02:51.839 [508/766] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:52.098 [509/766] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:52.098 [510/766] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:52.098 [511/766] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:52.098 [512/766] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:52.098 [513/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.098 [514/766] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:52.366 [515/766] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:52.366 [516/766] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:52.366 [517/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.633 [518/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.633 [519/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.633 [520/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.633 [521/766] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:52.633 [522/766] Linking static target lib/librte_node.a 00:02:52.633 [523/766] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.893 [524/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.893 [525/766] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.893 [526/766] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.893 [527/766] Linking target lib/librte_node.so.25.0 00:02:53.153 [528/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.153 [529/766] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.153 [530/766] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.153 [531/766] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.153 [532/766] Linking static target drivers/librte_bus_vdev.a 00:02:53.153 [533/766] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.153 [534/766] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.153 [535/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:53.153 [536/766] Linking static target drivers/librte_bus_pci.a 00:02:53.153 [537/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:53.153 [538/766] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.153 [539/766] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.413 [540/766] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.413 [541/766] Linking target drivers/librte_bus_vdev.so.25.0 00:02:53.413 [542/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:53.413 [543/766] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.413 [544/766] Linking static target drivers/libtmp_rte_mempool_ring.a 
00:02:53.413 [545/766] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:53.673 [546/766] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.673 [547/766] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.673 [548/766] Linking static target drivers/librte_mempool_ring.a 00:02:53.673 [549/766] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.673 [550/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:53.673 [551/766] Linking target drivers/librte_mempool_ring.so.25.0 00:02:53.673 [552/766] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.673 [553/766] Linking target drivers/librte_bus_pci.so.25.0 00:02:53.934 [554/766] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:53.934 [555/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:54.196 [556/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:54.457 [557/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:54.457 [558/766] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:54.717 [559/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:54.977 [560/766] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:54.977 [561/766] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:55.238 [562/766] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:55.238 [563/766] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:55.498 [564/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:55.498 [565/766] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:55.498 [566/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:55.498 [567/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:55.758 [568/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:56.018 [569/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:56.018 [570/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:56.018 [571/766] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:02:56.018 [572/766] Linking static target drivers/libtmp_rte_power_acpi.a 00:02:56.277 [573/766] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:02:56.277 [574/766] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:02:56.277 [575/766] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:02:56.277 [576/766] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:56.277 [577/766] Linking static target drivers/librte_power_acpi.a 00:02:56.277 [578/766] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:56.277 [579/766] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:02:56.277 [580/766] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:56.277 [581/766] Linking static target drivers/librte_power_amd_pstate.a 00:02:56.277 [582/766] Linking target drivers/librte_power_acpi.so.25.0 00:02:56.277 [583/766] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:56.538 [584/766] Linking target drivers/librte_power_amd_pstate.so.25.0 00:02:56.538 [585/766] Compiling C object 
drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:02:56.538 [586/766] Linking static target drivers/libtmp_rte_power_cppc.a 00:02:56.538 [587/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:02:56.538 [588/766] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:02:56.538 [589/766] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:02:56.538 [590/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:02:56.538 [591/766] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:02:56.538 [592/766] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:02:56.538 [593/766] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:56.538 [594/766] Linking static target drivers/librte_power_cppc.a 00:02:56.798 [595/766] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:56.798 [596/766] Linking target drivers/librte_power_cppc.so.25.0 00:02:56.798 [597/766] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:02:56.798 [598/766] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:02:56.798 [599/766] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:56.798 [600/766] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:56.798 [601/766] Linking static target drivers/librte_power_intel_pstate.a 00:02:56.798 [602/766] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:56.798 [603/766] Linking static target drivers/librte_power_kvm_vm.a 00:02:56.798 [604/766] Linking target drivers/librte_power_intel_pstate.so.25.0 00:02:56.798 [605/766] Compiling C object 
drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:56.798 [606/766] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:02:56.798 [607/766] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:02:57.058 [608/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:57.058 [609/766] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:57.058 [610/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:57.058 [611/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:57.058 [612/766] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.058 [613/766] Linking target drivers/librte_power_kvm_vm.so.25.0 00:02:57.058 [614/766] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:02:57.058 [615/766] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:57.058 [616/766] Linking static target drivers/librte_power_intel_uncore.a 00:02:57.058 [617/766] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:57.058 [618/766] Linking target drivers/librte_power_intel_uncore.so.25.0 00:02:57.318 [619/766] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:57.318 [620/766] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:57.318 [621/766] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:57.579 [622/766] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.579 [623/766] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:57.579 [624/766] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:57.839 [625/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:57.839 
[626/766] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:57.839 [627/766] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:57.839 [628/766] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:57.839 [629/766] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:57.839 [630/766] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:58.099 [631/766] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.099 [632/766] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:58.099 [633/766] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:58.099 [634/766] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.099 [635/766] Linking static target drivers/librte_net_i40e.a 00:02:58.099 [636/766] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:58.099 [637/766] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.099 [638/766] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:58.359 [639/766] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:58.359 [640/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.359 [641/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:58.359 [642/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.619 [643/766] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:58.619 [644/766] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.619 [645/766] Linking target drivers/librte_net_i40e.so.25.0 00:02:58.879 [646/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:58.879 [647/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:58.879 [648/766] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:59.138 [649/766] Compiling C 
object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.396 [650/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:59.396 [651/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:59.396 [652/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:59.396 [653/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:59.656 [654/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:59.656 [655/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:59.916 [656/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.916 [657/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:00.176 [658/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:00.176 [659/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:00.176 [660/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:00.176 [661/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:00.176 [662/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:00.176 [663/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:00.435 [664/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:00.435 [665/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:00.695 [666/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.695 [667/766] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:00.695 [668/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:00.695 [669/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:00.955 [670/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.955 [671/766] Linking static target lib/librte_vhost.a 00:03:01.215 [672/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:01.215 [673/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:01.215 [674/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:01.215 [675/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.155 [676/766] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.155 [677/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:02.155 [678/766] Linking target lib/librte_vhost.so.25.0 00:03:02.155 [679/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:02.155 [680/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:02.155 [681/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:02.155 [682/766] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:02.155 [683/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:02.415 [684/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:02.415 [685/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:02.415 [686/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:02.415 [687/766] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:02.415 [688/766] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:02.686 [689/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:02.686 [690/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:02.686 [691/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:02.972 [692/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:02.972 [693/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:02.972 [694/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:02.972 [695/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:02.972 [696/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:02.972 [697/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:03.252 [698/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:03.252 [699/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:03.252 [700/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:03.513 [701/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:03.513 [702/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:03.513 [703/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:03.513 [704/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:03.513 [705/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:03.513 [706/766] Linking static target lib/librte_pipeline.a 00:03:03.513 [707/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:03.773 [708/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:03.773 [709/766] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:03.773 [710/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:04.033 [711/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.033 [712/766] Linking target app/dpdk-dumpcap 00:03:04.033 [713/766] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:04.033 [714/766] Linking target app/dpdk-pdump 00:03:04.294 [715/766] Linking target app/dpdk-graph 00:03:04.294 [716/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:04.294 [717/766] Linking target app/dpdk-proc-info 00:03:04.554 [718/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:04.554 [719/766] Linking target app/dpdk-test-acl 00:03:04.554 [720/766] Linking target app/dpdk-test-bbdev 00:03:04.554 [721/766] Linking target app/dpdk-test-cmdline 00:03:04.554 [722/766] Linking target app/dpdk-test-compress-perf 00:03:04.554 [723/766] Linking target app/dpdk-test-crypto-perf 00:03:04.814 [724/766] Linking target app/dpdk-test-dma-perf 00:03:04.814 [725/766] Linking target app/dpdk-test-eventdev 00:03:04.814 [726/766] Linking target app/dpdk-test-fib 00:03:04.814 [727/766] Linking target app/dpdk-test-gpudev 00:03:04.814 [728/766] Linking target app/dpdk-test-flow-perf 00:03:04.814 [729/766] Linking target app/dpdk-test-pipeline 00:03:05.075 [730/766] Linking target app/dpdk-test-mldev 00:03:05.075 [731/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:05.335 [732/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:05.596 [733/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:05.596 [734/766] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:05.596 [735/766] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:03:05.596 [736/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:05.596 [737/766] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:05.857 [738/766] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:05.857 [739/766] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:06.117 [740/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:06.117 [741/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:06.117 [742/766] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:06.117 [743/766] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.377 [744/766] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:06.377 [745/766] Linking target lib/librte_pipeline.so.25.0 00:03:06.377 [746/766] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:06.377 [747/766] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:06.377 [748/766] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:06.637 [749/766] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:06.897 [750/766] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:06.897 [751/766] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:07.158 [752/766] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:07.158 [753/766] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:07.158 [754/766] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:07.418 [755/766] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:07.418 [756/766] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:07.418 [757/766] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:07.418 [758/766] Linking target app/dpdk-test-sad 00:03:07.678 [759/766] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:07.678 [760/766] Linking target app/dpdk-test-regex 00:03:07.678 [761/766] 
Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:07.678 [762/766] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:07.938 [763/766] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:08.198 [764/766] Linking target app/dpdk-test-security-perf 00:03:08.198 [765/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.768 [766/766] Linking target app/dpdk-testpmd 00:03:08.768 22:47:47 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:08.768 22:47:47 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:08.768 22:47:47 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:08.768 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.029 [0/1] Installing files. 00:03:09.293 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:09.293 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.294 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 
Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 
00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.298 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:03:09.298 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_ethdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.298 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:09.299 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 
Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.299 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_sched.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 
Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing 
drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.872 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:09.872 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.873 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 
Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.874 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 
Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.875 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.875 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.875 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:09.876 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:09.876 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:09.876 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.876 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:09.876 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:09.876 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:09.876 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.876 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:09.876 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.876 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:09.876 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.876 Installing symlink pointing to librte_rcu.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:09.876 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.876 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:09.876 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.876 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:09.876 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.876 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:09.876 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.876 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:09.876 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.876 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:09.876 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:09.876 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:09.876 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.876 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:09.876 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.876 Installing symlink pointing to librte_metrics.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:09.876 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.876 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:09.876 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.876 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:09.876 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.876 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:09.876 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.876 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:09.876 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.876 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:09.876 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.876 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:09.876 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.876 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:09.876 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.876 Installing symlink pointing to 
librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:09.876 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.876 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:09.876 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.876 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:09.876 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:09.876 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:09.876 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.876 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:09.876 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:09.876 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:09.876 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.876 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:09.876 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:09.876 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:09.876 Installing symlink pointing to librte_gpudev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.876 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:09.876 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.876 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:09.876 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.876 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:09.876 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.876 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:09.876 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.876 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:09.876 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.876 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:09.876 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.876 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:09.876 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.876 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:09.876 Installing symlink pointing to 
librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.876 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:09.876 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:09.876 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:09.876 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.876 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:09.876 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.876 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:09.876 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:09.876 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:09.876 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.876 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:09.876 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.876 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:09.876 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.877 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:09.877 Installing symlink pointing to 
librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.877 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:09.877 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.877 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:09.877 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.877 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:09.877 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:09.877 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:09.877 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:09.877 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:09.877 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.877 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:09.877 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.877 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:09.877 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.877 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:09.877 Installing symlink pointing to librte_table.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.877 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:09.877 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.877 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:09.877 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.877 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:09.877 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.877 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:09.877 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:09.877 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:09.877 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:09.877 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:09.877 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:09.877 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:09.877 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:09.877 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:09.877 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:09.877 './librte_bus_vdev.so.25.0' -> 
'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:09.877 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:09.877 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:09.877 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:09.877 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:09.877 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:09.877 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:09.877 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:09.877 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:09.877 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:09.877 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:09.877 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:09.877 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:09.877 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:09.877 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:09.877 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:09.877 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:09.877 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:09.877 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:09.877 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:09.877 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:09.877 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:09.877 './librte_power_kvm_vm.so' -> 
'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:09.877 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:09.877 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:09.877 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:09.877 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:09.877 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:09.877 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:09.877 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:09.877 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:09.877 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:09.877 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:09.877 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:09.877 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:09.877 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:09.877 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:09.877 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:09.877 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:09.877 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:09.877 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:09.877 22:47:48 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:09.877 ************************************ 00:03:09.877 END TEST build_native_dpdk 00:03:09.877 ************************************ 00:03:09.877 22:47:48 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.877 00:03:09.877 real 0m48.576s 00:03:09.877 user 5m24.564s 00:03:09.877 sys 1m0.076s 00:03:09.877 22:47:48 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.877 22:47:48 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:09.877 22:47:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.877 22:47:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.877 22:47:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:10.138 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:10.138 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.138 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:10.399 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:10.659 Using 'verbs' RDMA provider 00:03:26.560 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:44.696 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:44.696 Creating mk/config.mk...done. 00:03:44.696 Creating mk/cc.flags.mk...done. 00:03:44.696 Type 'make' to build. 00:03:44.696 22:48:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:44.696 22:48:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:44.696 22:48:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:44.696 22:48:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.696 ************************************ 00:03:44.696 START TEST make 00:03:44.696 ************************************ 00:03:44.696 22:48:22 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:44.696 make[1]: Nothing to be done for 'all'. 
00:04:31.409 CC lib/log/log.o 00:04:31.409 CC lib/log/log_flags.o 00:04:31.409 CC lib/log/log_deprecated.o 00:04:31.409 CC lib/ut_mock/mock.o 00:04:31.409 CC lib/ut/ut.o 00:04:31.409 LIB libspdk_log.a 00:04:31.409 LIB libspdk_ut_mock.a 00:04:31.409 SO libspdk_ut_mock.so.6.0 00:04:31.409 SO libspdk_log.so.7.1 00:04:31.409 LIB libspdk_ut.a 00:04:31.409 SO libspdk_ut.so.2.0 00:04:31.409 SYMLINK libspdk_log.so 00:04:31.409 SYMLINK libspdk_ut_mock.so 00:04:31.409 SYMLINK libspdk_ut.so 00:04:31.409 CXX lib/trace_parser/trace.o 00:04:31.409 CC lib/dma/dma.o 00:04:31.409 CC lib/util/base64.o 00:04:31.409 CC lib/util/bit_array.o 00:04:31.409 CC lib/util/crc16.o 00:04:31.409 CC lib/util/crc32.o 00:04:31.409 CC lib/util/cpuset.o 00:04:31.409 CC lib/util/crc32c.o 00:04:31.409 CC lib/ioat/ioat.o 00:04:31.409 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.409 CC lib/util/crc32_ieee.o 00:04:31.409 CC lib/util/crc64.o 00:04:31.409 CC lib/util/dif.o 00:04:31.409 CC lib/util/fd.o 00:04:31.409 LIB libspdk_dma.a 00:04:31.409 CC lib/util/fd_group.o 00:04:31.409 SO libspdk_dma.so.5.0 00:04:31.409 CC lib/util/file.o 00:04:31.409 CC lib/vfio_user/host/vfio_user.o 00:04:31.409 CC lib/util/hexlify.o 00:04:31.409 SYMLINK libspdk_dma.so 00:04:31.409 CC lib/util/iov.o 00:04:31.409 CC lib/util/math.o 00:04:31.409 LIB libspdk_ioat.a 00:04:31.409 SO libspdk_ioat.so.7.0 00:04:31.409 CC lib/util/net.o 00:04:31.409 CC lib/util/pipe.o 00:04:31.409 SYMLINK libspdk_ioat.so 00:04:31.409 CC lib/util/strerror_tls.o 00:04:31.409 CC lib/util/string.o 00:04:31.409 CC lib/util/uuid.o 00:04:31.410 LIB libspdk_vfio_user.a 00:04:31.410 CC lib/util/xor.o 00:04:31.410 SO libspdk_vfio_user.so.5.0 00:04:31.410 CC lib/util/zipf.o 00:04:31.410 CC lib/util/md5.o 00:04:31.410 SYMLINK libspdk_vfio_user.so 00:04:31.410 LIB libspdk_util.a 00:04:31.410 LIB libspdk_trace_parser.a 00:04:31.410 SO libspdk_util.so.10.1 00:04:31.410 SO libspdk_trace_parser.so.6.0 00:04:31.410 SYMLINK libspdk_util.so 00:04:31.410 SYMLINK 
libspdk_trace_parser.so 00:04:31.410 CC lib/idxd/idxd.o 00:04:31.410 CC lib/idxd/idxd_user.o 00:04:31.410 CC lib/idxd/idxd_kernel.o 00:04:31.410 CC lib/vmd/vmd.o 00:04:31.410 CC lib/vmd/led.o 00:04:31.410 CC lib/conf/conf.o 00:04:31.410 CC lib/rdma_utils/rdma_utils.o 00:04:31.410 CC lib/json/json_parse.o 00:04:31.410 CC lib/json/json_util.o 00:04:31.410 CC lib/env_dpdk/env.o 00:04:31.410 CC lib/env_dpdk/memory.o 00:04:31.410 CC lib/env_dpdk/pci.o 00:04:31.410 LIB libspdk_conf.a 00:04:31.410 SO libspdk_conf.so.6.0 00:04:31.410 CC lib/env_dpdk/init.o 00:04:31.410 CC lib/env_dpdk/threads.o 00:04:31.410 CC lib/json/json_write.o 00:04:31.410 SYMLINK libspdk_conf.so 00:04:31.410 CC lib/env_dpdk/pci_ioat.o 00:04:31.410 LIB libspdk_rdma_utils.a 00:04:31.410 SO libspdk_rdma_utils.so.1.0 00:04:31.410 SYMLINK libspdk_rdma_utils.so 00:04:31.410 CC lib/env_dpdk/pci_virtio.o 00:04:31.410 CC lib/env_dpdk/pci_vmd.o 00:04:31.410 CC lib/env_dpdk/pci_idxd.o 00:04:31.410 CC lib/env_dpdk/pci_event.o 00:04:31.410 CC lib/env_dpdk/sigbus_handler.o 00:04:31.410 CC lib/env_dpdk/pci_dpdk.o 00:04:31.410 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:31.410 LIB libspdk_json.a 00:04:31.410 SO libspdk_json.so.6.0 00:04:31.410 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.410 SYMLINK libspdk_json.so 00:04:31.410 LIB libspdk_idxd.a 00:04:31.410 LIB libspdk_vmd.a 00:04:31.410 SO libspdk_idxd.so.12.1 00:04:31.410 SO libspdk_vmd.so.6.0 00:04:31.410 SYMLINK libspdk_idxd.so 00:04:31.410 SYMLINK libspdk_vmd.so 00:04:31.410 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:31.410 CC lib/rdma_provider/common.o 00:04:31.410 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.410 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.410 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.410 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.410 LIB libspdk_rdma_provider.a 00:04:31.410 SO libspdk_rdma_provider.so.7.0 00:04:31.410 SYMLINK libspdk_rdma_provider.so 00:04:31.410 LIB libspdk_jsonrpc.a 00:04:31.410 SO libspdk_jsonrpc.so.6.0 00:04:31.410 SYMLINK 
libspdk_jsonrpc.so 00:04:31.410 LIB libspdk_env_dpdk.a 00:04:31.410 CC lib/rpc/rpc.o 00:04:31.410 SO libspdk_env_dpdk.so.15.1 00:04:31.410 LIB libspdk_rpc.a 00:04:31.410 SYMLINK libspdk_env_dpdk.so 00:04:31.410 SO libspdk_rpc.so.6.0 00:04:31.410 SYMLINK libspdk_rpc.so 00:04:31.410 CC lib/keyring/keyring_rpc.o 00:04:31.410 CC lib/keyring/keyring.o 00:04:31.410 CC lib/notify/notify_rpc.o 00:04:31.410 CC lib/notify/notify.o 00:04:31.410 CC lib/trace/trace.o 00:04:31.410 CC lib/trace/trace_flags.o 00:04:31.410 CC lib/trace/trace_rpc.o 00:04:31.410 LIB libspdk_notify.a 00:04:31.410 SO libspdk_notify.so.6.0 00:04:31.410 LIB libspdk_keyring.a 00:04:31.410 SO libspdk_keyring.so.2.0 00:04:31.410 LIB libspdk_trace.a 00:04:31.410 SYMLINK libspdk_notify.so 00:04:31.410 SO libspdk_trace.so.11.0 00:04:31.410 SYMLINK libspdk_keyring.so 00:04:31.410 SYMLINK libspdk_trace.so 00:04:31.410 CC lib/thread/thread.o 00:04:31.410 CC lib/thread/iobuf.o 00:04:31.410 CC lib/sock/sock.o 00:04:31.410 CC lib/sock/sock_rpc.o 00:04:31.669 LIB libspdk_sock.a 00:04:31.669 SO libspdk_sock.so.10.0 00:04:31.669 SYMLINK libspdk_sock.so 00:04:32.240 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:32.240 CC lib/nvme/nvme_ctrlr.o 00:04:32.240 CC lib/nvme/nvme_fabric.o 00:04:32.240 CC lib/nvme/nvme_pcie_common.o 00:04:32.240 CC lib/nvme/nvme_ns_cmd.o 00:04:32.240 CC lib/nvme/nvme_ns.o 00:04:32.240 CC lib/nvme/nvme_pcie.o 00:04:32.240 CC lib/nvme/nvme_qpair.o 00:04:32.240 CC lib/nvme/nvme.o 00:04:32.500 LIB libspdk_thread.a 00:04:32.760 SO libspdk_thread.so.11.0 00:04:32.760 SYMLINK libspdk_thread.so 00:04:32.760 CC lib/nvme/nvme_quirks.o 00:04:32.760 CC lib/nvme/nvme_transport.o 00:04:32.760 CC lib/nvme/nvme_discovery.o 00:04:32.760 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:32.760 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:33.020 CC lib/nvme/nvme_tcp.o 00:04:33.020 CC lib/accel/accel.o 00:04:33.020 CC lib/blob/blobstore.o 00:04:33.280 CC lib/accel/accel_rpc.o 00:04:33.280 CC lib/blob/request.o 00:04:33.280 CC 
lib/init/json_config.o 00:04:33.280 CC lib/blob/zeroes.o 00:04:33.280 CC lib/blob/blob_bs_dev.o 00:04:33.280 CC lib/virtio/virtio.o 00:04:33.540 CC lib/virtio/virtio_vhost_user.o 00:04:33.541 CC lib/virtio/virtio_vfio_user.o 00:04:33.541 CC lib/init/subsystem.o 00:04:33.541 CC lib/nvme/nvme_opal.o 00:04:33.541 CC lib/nvme/nvme_io_msg.o 00:04:33.541 CC lib/init/subsystem_rpc.o 00:04:33.541 CC lib/virtio/virtio_pci.o 00:04:33.541 CC lib/accel/accel_sw.o 00:04:33.801 CC lib/init/rpc.o 00:04:33.801 CC lib/fsdev/fsdev.o 00:04:33.801 LIB libspdk_init.a 00:04:33.801 LIB libspdk_virtio.a 00:04:33.801 SO libspdk_init.so.6.0 00:04:34.061 SO libspdk_virtio.so.7.0 00:04:34.061 CC lib/fsdev/fsdev_io.o 00:04:34.061 SYMLINK libspdk_init.so 00:04:34.061 CC lib/fsdev/fsdev_rpc.o 00:04:34.061 SYMLINK libspdk_virtio.so 00:04:34.061 CC lib/nvme/nvme_poll_group.o 00:04:34.061 CC lib/nvme/nvme_zns.o 00:04:34.061 CC lib/nvme/nvme_stubs.o 00:04:34.321 LIB libspdk_accel.a 00:04:34.321 CC lib/event/app.o 00:04:34.321 SO libspdk_accel.so.16.0 00:04:34.321 SYMLINK libspdk_accel.so 00:04:34.321 CC lib/event/reactor.o 00:04:34.321 CC lib/event/log_rpc.o 00:04:34.321 CC lib/event/app_rpc.o 00:04:34.581 CC lib/event/scheduler_static.o 00:04:34.581 LIB libspdk_fsdev.a 00:04:34.581 SO libspdk_fsdev.so.2.0 00:04:34.581 CC lib/nvme/nvme_auth.o 00:04:34.581 SYMLINK libspdk_fsdev.so 00:04:34.581 CC lib/nvme/nvme_cuse.o 00:04:34.581 CC lib/nvme/nvme_rdma.o 00:04:34.841 CC lib/bdev/bdev.o 00:04:34.841 CC lib/bdev/bdev_rpc.o 00:04:34.841 CC lib/bdev/part.o 00:04:34.841 CC lib/bdev/bdev_zone.o 00:04:34.841 LIB libspdk_event.a 00:04:34.841 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:34.841 SO libspdk_event.so.14.0 00:04:34.841 SYMLINK libspdk_event.so 00:04:34.841 CC lib/bdev/scsi_nvme.o 00:04:35.411 LIB libspdk_fuse_dispatcher.a 00:04:35.411 SO libspdk_fuse_dispatcher.so.1.0 00:04:35.411 SYMLINK libspdk_fuse_dispatcher.so 00:04:35.982 LIB libspdk_nvme.a 00:04:36.242 SO libspdk_nvme.so.15.0 00:04:36.242 
LIB libspdk_blob.a 00:04:36.502 SYMLINK libspdk_nvme.so 00:04:36.502 SO libspdk_blob.so.12.0 00:04:36.502 SYMLINK libspdk_blob.so 00:04:37.073 CC lib/lvol/lvol.o 00:04:37.073 CC lib/blobfs/blobfs.o 00:04:37.073 CC lib/blobfs/tree.o 00:04:37.333 LIB libspdk_bdev.a 00:04:37.593 SO libspdk_bdev.so.17.0 00:04:37.593 SYMLINK libspdk_bdev.so 00:04:37.593 LIB libspdk_blobfs.a 00:04:37.854 SO libspdk_blobfs.so.11.0 00:04:37.854 SYMLINK libspdk_blobfs.so 00:04:37.854 LIB libspdk_lvol.a 00:04:37.854 SO libspdk_lvol.so.11.0 00:04:37.854 CC lib/scsi/lun.o 00:04:37.854 CC lib/scsi/port.o 00:04:37.854 CC lib/scsi/dev.o 00:04:37.854 CC lib/scsi/scsi.o 00:04:37.854 CC lib/scsi/scsi_bdev.o 00:04:37.854 CC lib/ftl/ftl_core.o 00:04:37.854 CC lib/nvmf/ctrlr.o 00:04:37.854 CC lib/nbd/nbd.o 00:04:37.854 CC lib/ublk/ublk.o 00:04:37.854 SYMLINK libspdk_lvol.so 00:04:37.854 CC lib/ublk/ublk_rpc.o 00:04:38.114 CC lib/scsi/scsi_pr.o 00:04:38.114 CC lib/ftl/ftl_init.o 00:04:38.114 CC lib/ftl/ftl_layout.o 00:04:38.114 CC lib/scsi/scsi_rpc.o 00:04:38.114 CC lib/scsi/task.o 00:04:38.114 CC lib/nbd/nbd_rpc.o 00:04:38.114 CC lib/ftl/ftl_debug.o 00:04:38.114 CC lib/nvmf/ctrlr_discovery.o 00:04:38.374 CC lib/nvmf/ctrlr_bdev.o 00:04:38.374 CC lib/nvmf/subsystem.o 00:04:38.374 CC lib/nvmf/nvmf.o 00:04:38.374 LIB libspdk_scsi.a 00:04:38.374 LIB libspdk_nbd.a 00:04:38.374 CC lib/ftl/ftl_io.o 00:04:38.374 SO libspdk_nbd.so.7.0 00:04:38.374 SO libspdk_scsi.so.9.0 00:04:38.374 SYMLINK libspdk_nbd.so 00:04:38.374 CC lib/nvmf/nvmf_rpc.o 00:04:38.374 CC lib/nvmf/transport.o 00:04:38.374 SYMLINK libspdk_scsi.so 00:04:38.374 CC lib/nvmf/tcp.o 00:04:38.374 LIB libspdk_ublk.a 00:04:38.633 SO libspdk_ublk.so.3.0 00:04:38.633 SYMLINK libspdk_ublk.so 00:04:38.633 CC lib/ftl/ftl_sb.o 00:04:38.633 CC lib/iscsi/conn.o 00:04:38.893 CC lib/ftl/ftl_l2p.o 00:04:38.893 CC lib/vhost/vhost.o 00:04:38.893 CC lib/nvmf/stubs.o 00:04:39.156 CC lib/ftl/ftl_l2p_flat.o 00:04:39.156 CC lib/nvmf/mdns_server.o 00:04:39.156 CC 
lib/ftl/ftl_nv_cache.o 00:04:39.156 CC lib/nvmf/rdma.o 00:04:39.487 CC lib/iscsi/init_grp.o 00:04:39.487 CC lib/iscsi/iscsi.o 00:04:39.487 CC lib/iscsi/param.o 00:04:39.487 CC lib/iscsi/portal_grp.o 00:04:39.487 CC lib/nvmf/auth.o 00:04:39.787 CC lib/iscsi/tgt_node.o 00:04:39.787 CC lib/iscsi/iscsi_subsystem.o 00:04:39.787 CC lib/vhost/vhost_rpc.o 00:04:39.787 CC lib/iscsi/iscsi_rpc.o 00:04:39.787 CC lib/iscsi/task.o 00:04:40.048 CC lib/vhost/vhost_scsi.o 00:04:40.048 CC lib/vhost/vhost_blk.o 00:04:40.316 CC lib/vhost/rte_vhost_user.o 00:04:40.316 CC lib/ftl/ftl_band.o 00:04:40.316 CC lib/ftl/ftl_band_ops.o 00:04:40.316 CC lib/ftl/ftl_writer.o 00:04:40.316 CC lib/ftl/ftl_rq.o 00:04:40.578 CC lib/ftl/ftl_reloc.o 00:04:40.578 CC lib/ftl/ftl_l2p_cache.o 00:04:40.578 CC lib/ftl/ftl_p2l.o 00:04:40.578 CC lib/ftl/ftl_p2l_log.o 00:04:40.578 CC lib/ftl/mngt/ftl_mngt.o 00:04:40.838 LIB libspdk_iscsi.a 00:04:40.838 SO libspdk_iscsi.so.8.0 00:04:40.838 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:40.838 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.098 SYMLINK libspdk_iscsi.so 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:41.098 LIB libspdk_vhost.a 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:41.098 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:41.098 SO libspdk_vhost.so.8.0 00:04:41.098 CC lib/ftl/utils/ftl_conf.o 00:04:41.358 CC lib/ftl/utils/ftl_md.o 00:04:41.358 CC lib/ftl/utils/ftl_mempool.o 00:04:41.358 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.358 SYMLINK libspdk_vhost.so 00:04:41.358 CC lib/ftl/utils/ftl_property.o 00:04:41.358 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:41.358 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:41.358 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:41.358 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:41.359 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:41.619 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:41.619 LIB libspdk_nvmf.a 00:04:41.619 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:41.619 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:41.619 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:41.619 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:41.619 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:41.619 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:41.619 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:41.619 CC lib/ftl/base/ftl_base_dev.o 00:04:41.619 SO libspdk_nvmf.so.20.0 00:04:41.879 CC lib/ftl/base/ftl_base_bdev.o 00:04:41.879 CC lib/ftl/ftl_trace.o 00:04:42.138 SYMLINK libspdk_nvmf.so 00:04:42.138 LIB libspdk_ftl.a 00:04:42.398 SO libspdk_ftl.so.9.0 00:04:42.657 SYMLINK libspdk_ftl.so 00:04:42.916 CC module/env_dpdk/env_dpdk_rpc.o 00:04:42.916 CC module/accel/error/accel_error.o 00:04:42.916 CC module/sock/posix/posix.o 00:04:42.916 CC module/blob/bdev/blob_bdev.o 00:04:42.916 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:42.916 CC module/keyring/file/keyring.o 00:04:42.916 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:42.916 CC module/accel/ioat/accel_ioat.o 00:04:42.916 CC module/accel/dsa/accel_dsa.o 00:04:42.916 CC module/fsdev/aio/fsdev_aio.o 00:04:43.176 LIB libspdk_env_dpdk_rpc.a 00:04:43.176 SO libspdk_env_dpdk_rpc.so.6.0 00:04:43.176 CC module/keyring/file/keyring_rpc.o 00:04:43.176 SYMLINK libspdk_env_dpdk_rpc.so 00:04:43.176 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:43.176 CC module/accel/ioat/accel_ioat_rpc.o 00:04:43.176 LIB libspdk_scheduler_dpdk_governor.a 00:04:43.176 LIB libspdk_scheduler_dynamic.a 00:04:43.176 CC module/accel/error/accel_error_rpc.o 00:04:43.176 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:43.176 SO libspdk_scheduler_dynamic.so.4.0 00:04:43.176 LIB libspdk_keyring_file.a 00:04:43.176 LIB libspdk_blob_bdev.a 00:04:43.435 CC module/fsdev/aio/linux_aio_mgr.o 
00:04:43.435 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:43.435 SO libspdk_keyring_file.so.2.0 00:04:43.435 CC module/accel/dsa/accel_dsa_rpc.o 00:04:43.435 SYMLINK libspdk_scheduler_dynamic.so 00:04:43.435 LIB libspdk_accel_ioat.a 00:04:43.435 SO libspdk_blob_bdev.so.12.0 00:04:43.435 SO libspdk_accel_ioat.so.6.0 00:04:43.435 SYMLINK libspdk_keyring_file.so 00:04:43.435 LIB libspdk_accel_error.a 00:04:43.435 SYMLINK libspdk_blob_bdev.so 00:04:43.435 SO libspdk_accel_error.so.2.0 00:04:43.435 SYMLINK libspdk_accel_ioat.so 00:04:43.435 LIB libspdk_accel_dsa.a 00:04:43.435 SYMLINK libspdk_accel_error.so 00:04:43.435 SO libspdk_accel_dsa.so.5.0 00:04:43.435 CC module/scheduler/gscheduler/gscheduler.o 00:04:43.435 CC module/keyring/linux/keyring.o 00:04:43.435 CC module/accel/iaa/accel_iaa.o 00:04:43.435 SYMLINK libspdk_accel_dsa.so 00:04:43.435 CC module/accel/iaa/accel_iaa_rpc.o 00:04:43.695 CC module/bdev/delay/vbdev_delay.o 00:04:43.695 CC module/bdev/error/vbdev_error.o 00:04:43.695 LIB libspdk_scheduler_gscheduler.a 00:04:43.695 CC module/keyring/linux/keyring_rpc.o 00:04:43.695 SO libspdk_scheduler_gscheduler.so.4.0 00:04:43.695 CC module/bdev/gpt/gpt.o 00:04:43.695 CC module/bdev/gpt/vbdev_gpt.o 00:04:43.695 CC module/blobfs/bdev/blobfs_bdev.o 00:04:43.695 SYMLINK libspdk_scheduler_gscheduler.so 00:04:43.695 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:43.695 LIB libspdk_fsdev_aio.a 00:04:43.695 LIB libspdk_accel_iaa.a 00:04:43.695 SO libspdk_accel_iaa.so.3.0 00:04:43.695 SO libspdk_fsdev_aio.so.1.0 00:04:43.695 LIB libspdk_keyring_linux.a 00:04:43.695 LIB libspdk_sock_posix.a 00:04:43.695 SO libspdk_keyring_linux.so.1.0 00:04:43.954 SYMLINK libspdk_accel_iaa.so 00:04:43.954 SO libspdk_sock_posix.so.6.0 00:04:43.954 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:43.954 SYMLINK libspdk_fsdev_aio.so 00:04:43.954 CC module/bdev/error/vbdev_error_rpc.o 00:04:43.954 SYMLINK libspdk_keyring_linux.so 00:04:43.954 LIB libspdk_blobfs_bdev.a 00:04:43.954 SYMLINK 
libspdk_sock_posix.so 00:04:43.954 SO libspdk_blobfs_bdev.so.6.0 00:04:43.954 LIB libspdk_bdev_gpt.a 00:04:43.954 SO libspdk_bdev_gpt.so.6.0 00:04:43.954 LIB libspdk_bdev_error.a 00:04:43.954 SYMLINK libspdk_blobfs_bdev.so 00:04:43.954 CC module/bdev/lvol/vbdev_lvol.o 00:04:43.954 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:43.954 LIB libspdk_bdev_delay.a 00:04:43.954 CC module/bdev/malloc/bdev_malloc.o 00:04:43.954 SO libspdk_bdev_error.so.6.0 00:04:43.954 CC module/bdev/null/bdev_null.o 00:04:43.954 CC module/bdev/nvme/bdev_nvme.o 00:04:43.954 SO libspdk_bdev_delay.so.6.0 00:04:43.954 SYMLINK libspdk_bdev_gpt.so 00:04:43.954 CC module/bdev/passthru/vbdev_passthru.o 00:04:43.954 SYMLINK libspdk_bdev_error.so 00:04:43.954 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:44.213 SYMLINK libspdk_bdev_delay.so 00:04:44.213 CC module/bdev/raid/bdev_raid.o 00:04:44.213 CC module/bdev/raid/bdev_raid_rpc.o 00:04:44.213 CC module/bdev/split/vbdev_split.o 00:04:44.213 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:44.213 CC module/bdev/null/bdev_null_rpc.o 00:04:44.472 LIB libspdk_bdev_passthru.a 00:04:44.472 SO libspdk_bdev_passthru.so.6.0 00:04:44.472 CC module/bdev/raid/bdev_raid_sb.o 00:04:44.472 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:44.472 SYMLINK libspdk_bdev_passthru.so 00:04:44.472 CC module/bdev/split/vbdev_split_rpc.o 00:04:44.472 CC module/bdev/raid/raid0.o 00:04:44.472 LIB libspdk_bdev_null.a 00:04:44.472 CC module/bdev/raid/raid1.o 00:04:44.472 SO libspdk_bdev_null.so.6.0 00:04:44.472 LIB libspdk_bdev_lvol.a 00:04:44.472 SYMLINK libspdk_bdev_null.so 00:04:44.472 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:44.472 SO libspdk_bdev_lvol.so.6.0 00:04:44.472 LIB libspdk_bdev_malloc.a 00:04:44.472 LIB libspdk_bdev_split.a 00:04:44.472 SO libspdk_bdev_malloc.so.6.0 00:04:44.731 SO libspdk_bdev_split.so.6.0 00:04:44.731 SYMLINK libspdk_bdev_lvol.so 00:04:44.731 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:44.731 SYMLINK libspdk_bdev_malloc.so 
00:04:44.731 CC module/bdev/raid/concat.o 00:04:44.731 SYMLINK libspdk_bdev_split.so 00:04:44.731 CC module/bdev/nvme/nvme_rpc.o 00:04:44.731 CC module/bdev/raid/raid5f.o 00:04:44.731 LIB libspdk_bdev_zone_block.a 00:04:44.731 SO libspdk_bdev_zone_block.so.6.0 00:04:44.731 CC module/bdev/aio/bdev_aio.o 00:04:44.731 CC module/bdev/ftl/bdev_ftl.o 00:04:44.731 SYMLINK libspdk_bdev_zone_block.so 00:04:44.731 CC module/bdev/nvme/bdev_mdns_client.o 00:04:44.731 CC module/bdev/aio/bdev_aio_rpc.o 00:04:44.990 CC module/bdev/iscsi/bdev_iscsi.o 00:04:44.990 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:44.990 CC module/bdev/nvme/vbdev_opal.o 00:04:44.990 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:44.990 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:44.990 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.249 LIB libspdk_bdev_aio.a 00:04:45.249 SO libspdk_bdev_aio.so.6.0 00:04:45.249 SYMLINK libspdk_bdev_aio.so 00:04:45.249 LIB libspdk_bdev_iscsi.a 00:04:45.249 SO libspdk_bdev_iscsi.so.6.0 00:04:45.249 LIB libspdk_bdev_raid.a 00:04:45.249 LIB libspdk_bdev_ftl.a 00:04:45.249 SYMLINK libspdk_bdev_iscsi.so 00:04:45.249 SO libspdk_bdev_ftl.so.6.0 00:04:45.508 SO libspdk_bdev_raid.so.6.0 00:04:45.508 SYMLINK libspdk_bdev_ftl.so 00:04:45.508 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:45.508 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:45.508 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:45.508 SYMLINK libspdk_bdev_raid.so 00:04:46.076 LIB libspdk_bdev_virtio.a 00:04:46.076 SO libspdk_bdev_virtio.so.6.0 00:04:46.076 SYMLINK libspdk_bdev_virtio.so 00:04:47.014 LIB libspdk_bdev_nvme.a 00:04:47.014 SO libspdk_bdev_nvme.so.7.1 00:04:47.014 SYMLINK libspdk_bdev_nvme.so 00:04:47.584 CC module/event/subsystems/fsdev/fsdev.o 00:04:47.584 CC module/event/subsystems/scheduler/scheduler.o 00:04:47.584 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:47.584 CC module/event/subsystems/keyring/keyring.o 00:04:47.584 CC module/event/subsystems/iobuf/iobuf.o 00:04:47.584 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:04:47.584 CC module/event/subsystems/vmd/vmd.o 00:04:47.584 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:47.584 CC module/event/subsystems/sock/sock.o 00:04:47.843 LIB libspdk_event_fsdev.a 00:04:47.843 LIB libspdk_event_scheduler.a 00:04:47.843 LIB libspdk_event_vhost_blk.a 00:04:47.843 LIB libspdk_event_keyring.a 00:04:47.843 LIB libspdk_event_vmd.a 00:04:47.843 LIB libspdk_event_sock.a 00:04:47.843 SO libspdk_event_fsdev.so.1.0 00:04:47.843 SO libspdk_event_scheduler.so.4.0 00:04:47.843 LIB libspdk_event_iobuf.a 00:04:47.843 SO libspdk_event_vhost_blk.so.3.0 00:04:47.843 SO libspdk_event_keyring.so.1.0 00:04:47.843 SO libspdk_event_vmd.so.6.0 00:04:47.843 SO libspdk_event_sock.so.5.0 00:04:47.843 SO libspdk_event_iobuf.so.3.0 00:04:47.843 SYMLINK libspdk_event_fsdev.so 00:04:47.843 SYMLINK libspdk_event_scheduler.so 00:04:47.843 SYMLINK libspdk_event_vhost_blk.so 00:04:47.843 SYMLINK libspdk_event_keyring.so 00:04:47.843 SYMLINK libspdk_event_sock.so 00:04:47.843 SYMLINK libspdk_event_vmd.so 00:04:47.843 SYMLINK libspdk_event_iobuf.so 00:04:48.412 CC module/event/subsystems/accel/accel.o 00:04:48.412 LIB libspdk_event_accel.a 00:04:48.412 SO libspdk_event_accel.so.6.0 00:04:48.671 SYMLINK libspdk_event_accel.so 00:04:48.931 CC module/event/subsystems/bdev/bdev.o 00:04:49.190 LIB libspdk_event_bdev.a 00:04:49.190 SO libspdk_event_bdev.so.6.0 00:04:49.190 SYMLINK libspdk_event_bdev.so 00:04:49.449 CC module/event/subsystems/ublk/ublk.o 00:04:49.449 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:49.449 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:49.449 CC module/event/subsystems/scsi/scsi.o 00:04:49.449 CC module/event/subsystems/nbd/nbd.o 00:04:49.709 LIB libspdk_event_ublk.a 00:04:49.709 LIB libspdk_event_scsi.a 00:04:49.709 SO libspdk_event_ublk.so.3.0 00:04:49.709 LIB libspdk_event_nbd.a 00:04:49.709 SO libspdk_event_scsi.so.6.0 00:04:49.709 SO libspdk_event_nbd.so.6.0 00:04:49.709 LIB libspdk_event_nvmf.a 
00:04:49.709 SYMLINK libspdk_event_ublk.so 00:04:49.709 SYMLINK libspdk_event_scsi.so 00:04:49.709 SO libspdk_event_nvmf.so.6.0 00:04:49.709 SYMLINK libspdk_event_nbd.so 00:04:49.968 SYMLINK libspdk_event_nvmf.so 00:04:50.241 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.241 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:50.242 LIB libspdk_event_vhost_scsi.a 00:04:50.242 LIB libspdk_event_iscsi.a 00:04:50.242 SO libspdk_event_vhost_scsi.so.3.0 00:04:50.242 SO libspdk_event_iscsi.so.6.0 00:04:50.504 SYMLINK libspdk_event_vhost_scsi.so 00:04:50.504 SYMLINK libspdk_event_iscsi.so 00:04:50.764 SO libspdk.so.6.0 00:04:50.764 SYMLINK libspdk.so 00:04:51.024 CC test/rpc_client/rpc_client_test.o 00:04:51.024 TEST_HEADER include/spdk/accel.h 00:04:51.024 TEST_HEADER include/spdk/accel_module.h 00:04:51.024 TEST_HEADER include/spdk/assert.h 00:04:51.024 CXX app/trace/trace.o 00:04:51.024 TEST_HEADER include/spdk/barrier.h 00:04:51.024 TEST_HEADER include/spdk/base64.h 00:04:51.024 TEST_HEADER include/spdk/bdev.h 00:04:51.024 TEST_HEADER include/spdk/bdev_module.h 00:04:51.024 TEST_HEADER include/spdk/bdev_zone.h 00:04:51.024 TEST_HEADER include/spdk/bit_array.h 00:04:51.024 TEST_HEADER include/spdk/bit_pool.h 00:04:51.024 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.024 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.024 TEST_HEADER include/spdk/blobfs.h 00:04:51.024 TEST_HEADER include/spdk/blob.h 00:04:51.024 TEST_HEADER include/spdk/conf.h 00:04:51.024 TEST_HEADER include/spdk/config.h 00:04:51.024 TEST_HEADER include/spdk/cpuset.h 00:04:51.024 TEST_HEADER include/spdk/crc16.h 00:04:51.024 TEST_HEADER include/spdk/crc32.h 00:04:51.024 TEST_HEADER include/spdk/crc64.h 00:04:51.024 TEST_HEADER include/spdk/dif.h 00:04:51.024 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:51.024 TEST_HEADER include/spdk/dma.h 00:04:51.024 TEST_HEADER include/spdk/endian.h 00:04:51.024 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.024 TEST_HEADER include/spdk/env.h 00:04:51.024 
TEST_HEADER include/spdk/event.h 00:04:51.024 TEST_HEADER include/spdk/fd_group.h 00:04:51.024 TEST_HEADER include/spdk/fd.h 00:04:51.024 TEST_HEADER include/spdk/file.h 00:04:51.024 TEST_HEADER include/spdk/fsdev.h 00:04:51.024 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.024 TEST_HEADER include/spdk/ftl.h 00:04:51.024 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.024 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.024 TEST_HEADER include/spdk/hexlify.h 00:04:51.024 TEST_HEADER include/spdk/histogram_data.h 00:04:51.024 TEST_HEADER include/spdk/idxd.h 00:04:51.024 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.024 TEST_HEADER include/spdk/init.h 00:04:51.024 TEST_HEADER include/spdk/ioat.h 00:04:51.024 TEST_HEADER include/spdk/ioat_spec.h 00:04:51.024 TEST_HEADER include/spdk/iscsi_spec.h 00:04:51.024 TEST_HEADER include/spdk/json.h 00:04:51.024 CC examples/util/zipf/zipf.o 00:04:51.024 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.024 CC test/thread/poller_perf/poller_perf.o 00:04:51.024 CC examples/ioat/perf/perf.o 00:04:51.024 TEST_HEADER include/spdk/keyring.h 00:04:51.024 TEST_HEADER include/spdk/keyring_module.h 00:04:51.024 TEST_HEADER include/spdk/likely.h 00:04:51.024 TEST_HEADER include/spdk/log.h 00:04:51.024 TEST_HEADER include/spdk/lvol.h 00:04:51.024 TEST_HEADER include/spdk/md5.h 00:04:51.024 TEST_HEADER include/spdk/memory.h 00:04:51.024 TEST_HEADER include/spdk/mmio.h 00:04:51.024 TEST_HEADER include/spdk/nbd.h 00:04:51.024 TEST_HEADER include/spdk/net.h 00:04:51.024 TEST_HEADER include/spdk/notify.h 00:04:51.024 TEST_HEADER include/spdk/nvme.h 00:04:51.024 TEST_HEADER include/spdk/nvme_intel.h 00:04:51.024 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:51.024 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:51.024 CC test/dma/test_dma/test_dma.o 00:04:51.024 CC test/app/bdev_svc/bdev_svc.o 00:04:51.024 TEST_HEADER include/spdk/nvme_spec.h 00:04:51.024 TEST_HEADER include/spdk/nvme_zns.h 00:04:51.024 TEST_HEADER include/spdk/nvmf_cmd.h 
00:04:51.024 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:51.024 TEST_HEADER include/spdk/nvmf.h 00:04:51.024 TEST_HEADER include/spdk/nvmf_spec.h 00:04:51.024 TEST_HEADER include/spdk/nvmf_transport.h 00:04:51.024 TEST_HEADER include/spdk/opal.h 00:04:51.024 TEST_HEADER include/spdk/opal_spec.h 00:04:51.024 TEST_HEADER include/spdk/pci_ids.h 00:04:51.024 TEST_HEADER include/spdk/pipe.h 00:04:51.024 TEST_HEADER include/spdk/queue.h 00:04:51.024 TEST_HEADER include/spdk/reduce.h 00:04:51.024 TEST_HEADER include/spdk/rpc.h 00:04:51.024 TEST_HEADER include/spdk/scheduler.h 00:04:51.024 TEST_HEADER include/spdk/scsi.h 00:04:51.024 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.024 LINK rpc_client_test 00:04:51.024 TEST_HEADER include/spdk/scsi_spec.h 00:04:51.024 TEST_HEADER include/spdk/sock.h 00:04:51.285 TEST_HEADER include/spdk/stdinc.h 00:04:51.285 TEST_HEADER include/spdk/string.h 00:04:51.285 TEST_HEADER include/spdk/thread.h 00:04:51.285 TEST_HEADER include/spdk/trace.h 00:04:51.285 TEST_HEADER include/spdk/trace_parser.h 00:04:51.285 TEST_HEADER include/spdk/tree.h 00:04:51.285 TEST_HEADER include/spdk/ublk.h 00:04:51.285 TEST_HEADER include/spdk/util.h 00:04:51.285 TEST_HEADER include/spdk/uuid.h 00:04:51.285 TEST_HEADER include/spdk/version.h 00:04:51.285 LINK poller_perf 00:04:51.285 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:51.285 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:51.285 LINK zipf 00:04:51.285 LINK interrupt_tgt 00:04:51.285 TEST_HEADER include/spdk/vhost.h 00:04:51.285 TEST_HEADER include/spdk/vmd.h 00:04:51.285 TEST_HEADER include/spdk/xor.h 00:04:51.285 TEST_HEADER include/spdk/zipf.h 00:04:51.285 CXX test/cpp_headers/accel.o 00:04:51.285 LINK bdev_svc 00:04:51.285 LINK ioat_perf 00:04:51.285 CXX test/cpp_headers/accel_module.o 00:04:51.285 LINK spdk_trace 00:04:51.549 CC app/trace_record/trace_record.o 00:04:51.549 CXX test/cpp_headers/assert.o 00:04:51.549 CC app/nvmf_tgt/nvmf_main.o 00:04:51.549 CXX test/cpp_headers/barrier.o 
00:04:51.549 CC app/iscsi_tgt/iscsi_tgt.o 00:04:51.549 CC examples/ioat/verify/verify.o 00:04:51.549 CC app/spdk_tgt/spdk_tgt.o 00:04:51.549 LINK test_dma 00:04:51.549 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.549 CXX test/cpp_headers/base64.o 00:04:51.549 LINK nvmf_tgt 00:04:51.549 LINK mem_callbacks 00:04:51.809 LINK spdk_trace_record 00:04:51.809 LINK iscsi_tgt 00:04:51.809 LINK verify 00:04:51.809 LINK spdk_tgt 00:04:51.809 CXX test/cpp_headers/bdev.o 00:04:51.809 CC examples/thread/thread/thread_ex.o 00:04:51.809 CC test/env/vtophys/vtophys.o 00:04:51.809 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.069 CC examples/sock/hello_world/hello_sock.o 00:04:52.069 CC test/env/memory/memory_ut.o 00:04:52.069 CXX test/cpp_headers/bdev_module.o 00:04:52.069 CC test/env/pci/pci_ut.o 00:04:52.069 CC app/spdk_lspci/spdk_lspci.o 00:04:52.069 CC test/app/histogram_perf/histogram_perf.o 00:04:52.069 LINK nvme_fuzz 00:04:52.069 LINK vtophys 00:04:52.069 LINK thread 00:04:52.069 LINK env_dpdk_post_init 00:04:52.069 LINK spdk_lspci 00:04:52.069 CXX test/cpp_headers/bdev_zone.o 00:04:52.329 LINK histogram_perf 00:04:52.329 LINK hello_sock 00:04:52.329 CXX test/cpp_headers/bit_array.o 00:04:52.329 CXX test/cpp_headers/bit_pool.o 00:04:52.329 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.329 LINK pci_ut 00:04:52.329 CC app/spdk_nvme_perf/perf.o 00:04:52.329 CXX test/cpp_headers/blob_bdev.o 00:04:52.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.589 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.589 CC app/spdk_nvme_identify/identify.o 00:04:52.589 CC app/spdk_nvme_discover/discovery_aer.o 00:04:52.589 CC app/spdk_top/spdk_top.o 00:04:52.589 LINK lsvmd 00:04:52.589 CXX test/cpp_headers/blobfs_bdev.o 00:04:52.589 CXX test/cpp_headers/blobfs.o 00:04:52.589 LINK spdk_nvme_discover 00:04:52.849 CXX test/cpp_headers/blob.o 00:04:52.849 CC examples/vmd/led/led.o 00:04:52.849 CC app/vhost/vhost.o 
00:04:52.849 LINK vhost_fuzz 00:04:52.849 CC test/app/jsoncat/jsoncat.o 00:04:52.849 CXX test/cpp_headers/conf.o 00:04:53.109 LINK led 00:04:53.109 LINK vhost 00:04:53.109 LINK jsoncat 00:04:53.109 CXX test/cpp_headers/config.o 00:04:53.109 LINK memory_ut 00:04:53.109 CXX test/cpp_headers/cpuset.o 00:04:53.370 CC test/app/stub/stub.o 00:04:53.370 CXX test/cpp_headers/crc16.o 00:04:53.370 CC examples/idxd/perf/perf.o 00:04:53.370 CC app/spdk_dd/spdk_dd.o 00:04:53.370 LINK spdk_nvme_perf 00:04:53.370 LINK stub 00:04:53.370 LINK spdk_nvme_identify 00:04:53.629 CXX test/cpp_headers/crc32.o 00:04:53.629 CC app/fio/nvme/fio_plugin.o 00:04:53.629 LINK spdk_top 00:04:53.629 CC app/fio/bdev/fio_plugin.o 00:04:53.629 CXX test/cpp_headers/crc64.o 00:04:53.629 CXX test/cpp_headers/dif.o 00:04:53.629 CXX test/cpp_headers/dma.o 00:04:53.629 CXX test/cpp_headers/endian.o 00:04:53.629 LINK idxd_perf 00:04:53.888 LINK spdk_dd 00:04:53.888 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:53.888 CXX test/cpp_headers/env_dpdk.o 00:04:53.888 CC examples/accel/perf/accel_perf.o 00:04:53.888 CC examples/nvme/hello_world/hello_world.o 00:04:53.888 CC examples/blob/hello_world/hello_blob.o 00:04:53.888 CC examples/nvme/reconnect/reconnect.o 00:04:54.147 CXX test/cpp_headers/env.o 00:04:54.147 LINK spdk_bdev 00:04:54.147 LINK spdk_nvme 00:04:54.147 LINK hello_fsdev 00:04:54.147 CC test/event/event_perf/event_perf.o 00:04:54.147 CXX test/cpp_headers/event.o 00:04:54.147 LINK hello_world 00:04:54.147 LINK hello_blob 00:04:54.147 LINK iscsi_fuzz 00:04:54.147 CC test/event/reactor_perf/reactor_perf.o 00:04:54.147 CC test/event/reactor/reactor.o 00:04:54.406 LINK event_perf 00:04:54.406 CXX test/cpp_headers/fd_group.o 00:04:54.406 CXX test/cpp_headers/fd.o 00:04:54.406 LINK reconnect 00:04:54.406 CXX test/cpp_headers/file.o 00:04:54.406 LINK reactor 00:04:54.406 LINK reactor_perf 00:04:54.406 LINK accel_perf 00:04:54.406 CC examples/blob/cli/blobcli.o 00:04:54.665 CXX test/cpp_headers/fsdev.o 
00:04:54.665 CXX test/cpp_headers/fsdev_module.o 00:04:54.665 CC test/event/app_repeat/app_repeat.o 00:04:54.665 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:54.665 CC test/nvme/aer/aer.o 00:04:54.665 CC test/event/scheduler/scheduler.o 00:04:54.665 CC test/nvme/reset/reset.o 00:04:54.665 CXX test/cpp_headers/ftl.o 00:04:54.665 LINK app_repeat 00:04:54.924 CC test/accel/dif/dif.o 00:04:54.924 CC test/blobfs/mkfs/mkfs.o 00:04:54.924 LINK scheduler 00:04:54.924 LINK reset 00:04:54.924 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.924 LINK aer 00:04:54.924 CC test/lvol/esnap/esnap.o 00:04:54.924 CC test/nvme/sgl/sgl.o 00:04:54.924 LINK mkfs 00:04:54.924 LINK blobcli 00:04:55.184 CXX test/cpp_headers/gpt_spec.o 00:04:55.184 LINK nvme_manage 00:04:55.184 CC test/nvme/e2edp/nvme_dp.o 00:04:55.184 CXX test/cpp_headers/hexlify.o 00:04:55.184 CXX test/cpp_headers/histogram_data.o 00:04:55.184 CC examples/bdev/hello_world/hello_bdev.o 00:04:55.184 CC examples/bdev/bdevperf/bdevperf.o 00:04:55.184 LINK sgl 00:04:55.443 CC examples/nvme/arbitration/arbitration.o 00:04:55.443 CXX test/cpp_headers/idxd.o 00:04:55.443 CC test/nvme/overhead/overhead.o 00:04:55.443 CC test/nvme/err_injection/err_injection.o 00:04:55.443 LINK nvme_dp 00:04:55.443 LINK hello_bdev 00:04:55.443 CC test/nvme/startup/startup.o 00:04:55.443 CXX test/cpp_headers/idxd_spec.o 00:04:55.443 LINK dif 00:04:55.702 CXX test/cpp_headers/init.o 00:04:55.702 LINK err_injection 00:04:55.702 LINK arbitration 00:04:55.702 LINK startup 00:04:55.702 LINK overhead 00:04:55.702 CC test/nvme/reserve/reserve.o 00:04:55.702 CC test/nvme/simple_copy/simple_copy.o 00:04:55.702 CXX test/cpp_headers/ioat.o 00:04:55.702 CXX test/cpp_headers/ioat_spec.o 00:04:55.702 CXX test/cpp_headers/iscsi_spec.o 00:04:55.962 CXX test/cpp_headers/json.o 00:04:55.962 LINK reserve 00:04:55.962 CC examples/nvme/hotplug/hotplug.o 00:04:55.962 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.962 CXX test/cpp_headers/jsonrpc.o 00:04:55.962 LINK 
simple_copy 00:04:55.962 CXX test/cpp_headers/keyring.o 00:04:55.962 CC examples/nvme/abort/abort.o 00:04:56.221 CXX test/cpp_headers/keyring_module.o 00:04:56.221 LINK bdevperf 00:04:56.221 CC test/bdev/bdevio/bdevio.o 00:04:56.221 CXX test/cpp_headers/likely.o 00:04:56.221 LINK cmb_copy 00:04:56.221 LINK hotplug 00:04:56.221 CC test/nvme/connect_stress/connect_stress.o 00:04:56.221 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.221 CXX test/cpp_headers/log.o 00:04:56.481 CXX test/cpp_headers/lvol.o 00:04:56.481 CC test/nvme/boot_partition/boot_partition.o 00:04:56.481 CC test/nvme/compliance/nvme_compliance.o 00:04:56.481 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.481 LINK pmr_persistence 00:04:56.481 LINK connect_stress 00:04:56.481 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.481 LINK abort 00:04:56.481 CXX test/cpp_headers/md5.o 00:04:56.481 LINK boot_partition 00:04:56.481 LINK bdevio 00:04:56.481 CXX test/cpp_headers/memory.o 00:04:56.481 CXX test/cpp_headers/mmio.o 00:04:56.481 LINK fused_ordering 00:04:56.481 LINK doorbell_aers 00:04:56.741 CXX test/cpp_headers/nbd.o 00:04:56.741 CXX test/cpp_headers/net.o 00:04:56.741 CXX test/cpp_headers/notify.o 00:04:56.741 CXX test/cpp_headers/nvme.o 00:04:56.741 CXX test/cpp_headers/nvme_intel.o 00:04:56.741 LINK nvme_compliance 00:04:56.741 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.741 CC examples/nvmf/nvmf/nvmf.o 00:04:56.741 CC test/nvme/fdp/fdp.o 00:04:56.741 CC test/nvme/cuse/cuse.o 00:04:56.741 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.741 CXX test/cpp_headers/nvme_spec.o 00:04:56.741 CXX test/cpp_headers/nvme_zns.o 00:04:57.001 CXX test/cpp_headers/nvmf_cmd.o 00:04:57.001 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:57.001 CXX test/cpp_headers/nvmf.o 00:04:57.001 CXX test/cpp_headers/nvmf_spec.o 00:04:57.001 CXX test/cpp_headers/nvmf_transport.o 00:04:57.001 CXX test/cpp_headers/opal.o 00:04:57.001 LINK nvmf 00:04:57.001 CXX test/cpp_headers/opal_spec.o 00:04:57.001 CXX 
test/cpp_headers/pci_ids.o 00:04:57.001 CXX test/cpp_headers/pipe.o 00:04:57.001 LINK fdp 00:04:57.001 CXX test/cpp_headers/queue.o 00:04:57.260 CXX test/cpp_headers/reduce.o 00:04:57.260 CXX test/cpp_headers/rpc.o 00:04:57.260 CXX test/cpp_headers/scheduler.o 00:04:57.260 CXX test/cpp_headers/scsi.o 00:04:57.260 CXX test/cpp_headers/scsi_spec.o 00:04:57.260 CXX test/cpp_headers/sock.o 00:04:57.260 CXX test/cpp_headers/stdinc.o 00:04:57.260 CXX test/cpp_headers/string.o 00:04:57.260 CXX test/cpp_headers/thread.o 00:04:57.260 CXX test/cpp_headers/trace.o 00:04:57.260 CXX test/cpp_headers/trace_parser.o 00:04:57.260 CXX test/cpp_headers/tree.o 00:04:57.260 CXX test/cpp_headers/ublk.o 00:04:57.519 CXX test/cpp_headers/util.o 00:04:57.519 CXX test/cpp_headers/uuid.o 00:04:57.519 CXX test/cpp_headers/version.o 00:04:57.519 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.519 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.519 CXX test/cpp_headers/vhost.o 00:04:57.519 CXX test/cpp_headers/vmd.o 00:04:57.519 CXX test/cpp_headers/xor.o 00:04:57.519 CXX test/cpp_headers/zipf.o 00:04:58.088 LINK cuse 00:05:00.627 LINK esnap 00:05:00.887 00:05:00.887 real 1m17.179s 00:05:00.887 user 5m44.060s 00:05:00.887 sys 1m12.594s 00:05:00.887 22:49:39 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:00.887 22:49:39 make -- common/autotest_common.sh@10 -- $ set +x 00:05:00.887 ************************************ 00:05:00.887 END TEST make 00:05:00.887 ************************************ 00:05:00.887 22:49:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:00.887 22:49:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:00.887 22:49:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:00.887 22:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.887 22:49:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:00.887 22:49:39 -- pm/common@44 -- $ pid=6212 00:05:00.887 
22:49:39 -- pm/common@50 -- $ kill -TERM 6212 00:05:00.887 22:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.887 22:49:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:00.887 22:49:39 -- pm/common@44 -- $ pid=6214 00:05:00.887 22:49:39 -- pm/common@50 -- $ kill -TERM 6214 00:05:00.887 22:49:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:00.887 22:49:40 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:01.146 22:49:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.146 22:49:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.146 22:49:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.146 22:49:40 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.146 22:49:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.146 22:49:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.146 22:49:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.146 22:49:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.146 22:49:40 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.146 22:49:40 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.146 22:49:40 -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.146 22:49:40 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.146 22:49:40 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.146 22:49:40 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.146 22:49:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.146 22:49:40 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.146 22:49:40 -- scripts/common.sh@345 -- # : 1 00:05:01.146 22:49:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.146 22:49:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.146 22:49:40 -- scripts/common.sh@365 -- # decimal 1 00:05:01.146 22:49:40 -- scripts/common.sh@353 -- # local d=1 00:05:01.146 22:49:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.146 22:49:40 -- scripts/common.sh@355 -- # echo 1 00:05:01.146 22:49:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.146 22:49:40 -- scripts/common.sh@366 -- # decimal 2 00:05:01.146 22:49:40 -- scripts/common.sh@353 -- # local d=2 00:05:01.146 22:49:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.146 22:49:40 -- scripts/common.sh@355 -- # echo 2 00:05:01.146 22:49:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.146 22:49:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.146 22:49:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.146 22:49:40 -- scripts/common.sh@368 -- # return 0 00:05:01.146 22:49:40 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.147 22:49:40 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.147 --rc genhtml_branch_coverage=1 00:05:01.147 --rc genhtml_function_coverage=1 00:05:01.147 --rc genhtml_legend=1 00:05:01.147 --rc geninfo_all_blocks=1 00:05:01.147 --rc geninfo_unexecuted_blocks=1 00:05:01.147 00:05:01.147 ' 00:05:01.147 22:49:40 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.147 --rc genhtml_branch_coverage=1 00:05:01.147 --rc genhtml_function_coverage=1 00:05:01.147 --rc genhtml_legend=1 00:05:01.147 --rc geninfo_all_blocks=1 00:05:01.147 --rc geninfo_unexecuted_blocks=1 00:05:01.147 00:05:01.147 ' 00:05:01.147 22:49:40 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.147 --rc genhtml_branch_coverage=1 00:05:01.147 --rc 
genhtml_function_coverage=1 00:05:01.147 --rc genhtml_legend=1 00:05:01.147 --rc geninfo_all_blocks=1 00:05:01.147 --rc geninfo_unexecuted_blocks=1 00:05:01.147 00:05:01.147 ' 00:05:01.147 22:49:40 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.147 --rc genhtml_branch_coverage=1 00:05:01.147 --rc genhtml_function_coverage=1 00:05:01.147 --rc genhtml_legend=1 00:05:01.147 --rc geninfo_all_blocks=1 00:05:01.147 --rc geninfo_unexecuted_blocks=1 00:05:01.147 00:05:01.147 ' 00:05:01.147 22:49:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.147 22:49:40 -- nvmf/common.sh@7 -- # uname -s 00:05:01.147 22:49:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.147 22:49:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.147 22:49:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.147 22:49:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.147 22:49:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.147 22:49:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.147 22:49:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.147 22:49:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.147 22:49:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.147 22:49:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.147 22:49:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:05:01.147 22:49:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:05:01.147 22:49:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.147 22:49:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.147 22:49:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.147 22:49:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:01.147 22:49:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.147 22:49:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.147 22:49:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.147 22:49:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.147 22:49:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.147 22:49:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.147 22:49:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.147 22:49:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.147 22:49:40 -- paths/export.sh@5 -- # export PATH 00:05:01.147 22:49:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.147 22:49:40 -- nvmf/common.sh@51 -- # : 0 00:05:01.147 22:49:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.147 22:49:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.147 22:49:40 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:01.147 22:49:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.147 22:49:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.147 22:49:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.147 22:49:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.147 22:49:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.147 22:49:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.147 22:49:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.147 22:49:40 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.407 22:49:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.407 22:49:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.407 22:49:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.407 22:49:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.407 22:49:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.407 22:49:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.407 22:49:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.407 22:49:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.407 22:49:40 -- spdk/autotest.sh@48 -- # udevadm_pid=68350 00:05:01.407 22:49:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.407 22:49:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.407 22:49:40 -- pm/common@17 -- # local monitor 00:05:01.407 22:49:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.407 22:49:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.407 22:49:40 -- pm/common@25 -- # sleep 1 00:05:01.407 22:49:40 -- pm/common@21 -- # date +%s 00:05:01.407 22:49:40 -- 
pm/common@21 -- # date +%s 00:05:01.407 22:49:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732661380 00:05:01.407 22:49:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732661380 00:05:01.407 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732661380_collect-vmstat.pm.log 00:05:01.407 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732661380_collect-cpu-load.pm.log 00:05:02.343 22:49:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.343 22:49:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.343 22:49:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.343 22:49:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.343 22:49:41 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.343 22:49:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.343 22:49:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.343 22:49:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.343 22:49:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.343 22:49:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.343 22:49:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.343 22:49:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.343 22:49:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.343 22:49:41 -- common/autotest_common.sh@1457 -- # uname 00:05:02.343 22:49:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.343 22:49:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.343 22:49:41 -- common/autotest_common.sh@1477 -- 
# uname 00:05:02.343 22:49:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.343 22:49:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.343 22:49:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.601 lcov: LCOV version 1.15 00:05:02.601 22:49:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.584 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:17.584 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:32.483 22:50:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:32.483 22:50:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.483 22:50:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.483 22:50:09 -- spdk/autotest.sh@78 -- # rm -f 00:05:32.483 22:50:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.483 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:32.483 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:32.483 22:50:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:32.483 22:50:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:32.483 22:50:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:32.483 22:50:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:32.483 
22:50:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.483 22:50:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:32.483 22:50:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:32.483 22:50:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.483 22:50:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:32.483 22:50:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:32.483 22:50:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.483 22:50:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:32.483 22:50:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:32.483 22:50:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.483 22:50:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:32.483 22:50:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:32.483 22:50:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.483 22:50:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.483 22:50:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:32.483 22:50:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.483 22:50:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.483 22:50:10 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:32.483 22:50:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:32.483 22:50:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.483 No valid GPT data, bailing 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # pt= 00:05:32.483 22:50:10 -- scripts/common.sh@395 -- # return 1 00:05:32.483 22:50:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.483 1+0 records in 00:05:32.483 1+0 records out 00:05:32.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599521 s, 175 MB/s 00:05:32.483 22:50:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.483 22:50:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.483 22:50:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:32.483 22:50:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:32.483 22:50:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.483 No valid GPT data, bailing 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # pt= 00:05:32.483 22:50:10 -- scripts/common.sh@395 -- # return 1 00:05:32.483 22:50:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.483 1+0 records in 00:05:32.483 1+0 records out 00:05:32.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665675 s, 158 MB/s 00:05:32.483 22:50:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.483 22:50:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.483 22:50:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:32.483 22:50:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:32.483 22:50:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:32.483 No valid GPT data, bailing 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # pt= 00:05:32.483 22:50:10 -- scripts/common.sh@395 -- # return 1 00:05:32.483 22:50:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:32.483 1+0 records in 00:05:32.483 1+0 records out 00:05:32.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392509 s, 267 MB/s 00:05:32.483 22:50:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.483 22:50:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.483 22:50:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:32.483 22:50:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:32.483 22:50:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:32.483 No valid GPT data, bailing 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:32.483 22:50:10 -- scripts/common.sh@394 -- # pt= 00:05:32.483 22:50:10 -- scripts/common.sh@395 -- # return 1 00:05:32.483 22:50:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:32.483 1+0 records in 00:05:32.483 1+0 records out 00:05:32.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665049 s, 158 MB/s 00:05:32.483 22:50:10 -- spdk/autotest.sh@105 -- # sync 00:05:32.483 22:50:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.483 22:50:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.483 22:50:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.048 22:50:13 -- spdk/autotest.sh@111 -- # uname -s 00:05:35.048 22:50:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:35.048 22:50:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:35.048 22:50:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:35.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.992 Hugepages 00:05:35.992 node hugesize free / total 00:05:35.992 node0 1048576kB 0 / 0 00:05:35.992 node0 2048kB 0 / 0 00:05:35.992 00:05:35.992 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:35.993 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:36.252 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:36.252 22:50:15 -- spdk/autotest.sh@117 -- # uname -s 00:05:36.252 22:50:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:36.252 22:50:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:36.252 22:50:15 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.083 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.083 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.083 22:50:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:38.466 22:50:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:38.466 22:50:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:38.466 22:50:17 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.466 22:50:17 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:38.466 22:50:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.466 22:50:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.466 22:50:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.466 22:50:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.466 22:50:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.466 22:50:17 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.466 22:50:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:38.466 22:50:17 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.726 Waiting for block devices as requested 00:05:38.986 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.986 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.986 22:50:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:38.986 22:50:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:38.986 22:50:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:38.986 22:50:18 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:38.986 22:50:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1543 -- # continue 00:05:38.986 22:50:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:38.986 22:50:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:38.986 22:50:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:38.986 22:50:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:38.986 22:50:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:38.986 22:50:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:38.986 22:50:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.246 22:50:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:39.246 22:50:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:39.246 22:50:18 -- common/autotest_common.sh@1543 -- # continue 00:05:39.246 22:50:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:39.246 22:50:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.246 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:39.246 22:50:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:39.246 22:50:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.246 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:39.246 22:50:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.184 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.184 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.184 22:50:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:40.184 22:50:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.184 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.184 22:50:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:40.184 22:50:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:40.184 22:50:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.184 22:50:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:40.184 22:50:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:40.184 22:50:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:40.184 22:50:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:40.184 22:50:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:40.184 
22:50:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:40.184 22:50:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:40.184 22:50:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.184 22:50:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.184 22:50:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:40.444 22:50:19 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:40.444 22:50:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:40.444 22:50:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:40.444 22:50:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:40.444 22:50:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:40.444 22:50:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.444 22:50:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:40.444 22:50:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:40.444 22:50:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:40.444 22:50:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.444 22:50:19 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:40.444 22:50:19 -- common/autotest_common.sh@1572 -- # return 0 00:05:40.444 22:50:19 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:40.444 22:50:19 -- common/autotest_common.sh@1580 -- # return 0 00:05:40.444 22:50:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:40.444 22:50:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:40.444 22:50:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.444 22:50:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.444 22:50:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:40.444 22:50:19 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.444 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.444 22:50:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:40.444 22:50:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.444 22:50:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.444 22:50:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.444 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.444 ************************************ 00:05:40.444 START TEST env 00:05:40.444 ************************************ 00:05:40.444 22:50:19 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.444 * Looking for test storage... 00:05:40.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:40.444 22:50:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.444 22:50:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.444 22:50:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.703 22:50:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.703 22:50:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.703 22:50:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.703 22:50:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.703 22:50:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.703 22:50:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.703 22:50:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.703 22:50:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.703 22:50:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.703 22:50:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.703 22:50:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.703 22:50:19 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:40.703 22:50:19 env -- scripts/common.sh@345 -- # : 1 00:05:40.703 22:50:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.703 22:50:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.703 22:50:19 env -- scripts/common.sh@365 -- # decimal 1 00:05:40.703 22:50:19 env -- scripts/common.sh@353 -- # local d=1 00:05:40.703 22:50:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.703 22:50:19 env -- scripts/common.sh@355 -- # echo 1 00:05:40.703 22:50:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.703 22:50:19 env -- scripts/common.sh@366 -- # decimal 2 00:05:40.703 22:50:19 env -- scripts/common.sh@353 -- # local d=2 00:05:40.703 22:50:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.703 22:50:19 env -- scripts/common.sh@355 -- # echo 2 00:05:40.703 22:50:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.703 22:50:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.703 22:50:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.703 22:50:19 env -- scripts/common.sh@368 -- # return 0 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc 
geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 22:50:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.703 22:50:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.703 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.703 ************************************ 00:05:40.703 START TEST env_memory 00:05:40.703 ************************************ 00:05:40.703 22:50:19 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.703 00:05:40.703 00:05:40.703 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.703 http://cunit.sourceforge.net/ 00:05:40.704 00:05:40.704 00:05:40.704 Suite: memory 00:05:40.704 Test: alloc and free memory map ...[2024-11-26 22:50:19.704771] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.704 passed 00:05:40.704 Test: mem map translation ...[2024-11-26 22:50:19.745323] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:40.704 [2024-11-26 22:50:19.745390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:40.704 [2024-11-26 22:50:19.745447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:40.704 [2024-11-26 22:50:19.745478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:40.704 passed 00:05:40.704 Test: mem map registration ...[2024-11-26 22:50:19.809422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:40.704 [2024-11-26 22:50:19.809460] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:40.964 passed 00:05:40.964 Test: mem map adjacent registrations ...passed 00:05:40.964 00:05:40.964 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.964 suites 1 1 n/a 0 0 00:05:40.964 tests 4 4 4 0 0 00:05:40.964 asserts 152 152 152 0 n/a 00:05:40.964 00:05:40.964 Elapsed time = 0.230 seconds 00:05:40.964 00:05:40.964 real 0m0.283s 00:05:40.964 user 0m0.249s 00:05:40.964 sys 0m0.023s 00:05:40.964 22:50:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.964 22:50:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:40.964 ************************************ 00:05:40.964 END TEST env_memory 00:05:40.964 ************************************ 00:05:40.964 22:50:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:40.964 
22:50:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.964 22:50:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.964 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.964 ************************************ 00:05:40.964 START TEST env_vtophys 00:05:40.964 ************************************ 00:05:40.964 22:50:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:40.964 EAL: lib.eal log level changed from notice to debug 00:05:40.964 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 1 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 2 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 3 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 4 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 5 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 6 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 7 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 8 as core 0 on socket 0 00:05:40.964 EAL: Detected lcore 9 as core 0 on socket 0 00:05:40.964 EAL: Maximum logical cores by configuration: 128 00:05:40.964 EAL: Detected CPU lcores: 10 00:05:40.964 EAL: Detected NUMA nodes: 1 00:05:40.964 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:40.964 EAL: Detected shared linkage of DPDK 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:40.964 EAL: Registered [vdev] bus. 
00:05:40.964 EAL: bus.vdev log level changed from disabled to notice 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:40.964 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:40.964 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:05:40.964 EAL: open shared lib 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:05:40.964 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:05:40.964 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.964 EAL: No shared files mode enabled, IPC is disabled 00:05:40.964 EAL: Selected IOVA mode 'PA' 00:05:40.964 EAL: Probing VFIO support... 00:05:40.964 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:40.964 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:40.964 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.964 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.964 EAL: Setting up physically contiguous memory... 00:05:40.964 EAL: Setting maximum number of open files to 524288 00:05:40.964 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.964 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.964 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.964 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.964 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.964 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.964 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.964 EAL: Virtual 
area found at 0x200800400000 (size = 0x61000) 00:05:40.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.964 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.964 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.964 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.964 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.964 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.964 EAL: Hugepages will be freed exactly as allocated. 00:05:40.964 EAL: No shared files mode enabled, IPC is disabled 00:05:40.964 EAL: No shared files mode enabled, IPC is disabled 00:05:41.224 EAL: TSC frequency is ~2294600 KHz 00:05:41.224 EAL: Main lcore 0 is ready (tid=7ff998a53a40;cpuset=[0]) 00:05:41.224 EAL: Trying to obtain current memory policy. 00:05:41.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.224 EAL: Restoring previous memory policy: 0 00:05:41.224 EAL: request: mp_malloc_sync 00:05:41.224 EAL: No shared files mode enabled, IPC is disabled 00:05:41.224 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.224 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:41.224 EAL: No shared files mode enabled, IPC is disabled 00:05:41.224 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.224 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:41.224 00:05:41.224 00:05:41.224 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.224 http://cunit.sourceforge.net/ 00:05:41.224 00:05:41.224 00:05:41.224 Suite: components_suite 00:05:41.485 Test: vtophys_malloc_test ...passed 00:05:41.485 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 4MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was shrunk by 4MB 00:05:41.485 EAL: Trying to obtain current memory policy. 00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 6MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was shrunk by 6MB 00:05:41.485 EAL: Trying to obtain current memory policy. 
00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 10MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was shrunk by 10MB 00:05:41.485 EAL: Trying to obtain current memory policy. 00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 18MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was shrunk by 18MB 00:05:41.485 EAL: Trying to obtain current memory policy. 00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 34MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was shrunk by 34MB 00:05:41.485 EAL: Trying to obtain current memory policy. 
00:05:41.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.485 EAL: Restoring previous memory policy: 4 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.485 EAL: request: mp_malloc_sync 00:05:41.485 EAL: No shared files mode enabled, IPC is disabled 00:05:41.485 EAL: Heap on socket 0 was expanded by 66MB 00:05:41.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.745 EAL: request: mp_malloc_sync 00:05:41.745 EAL: No shared files mode enabled, IPC is disabled 00:05:41.745 EAL: Heap on socket 0 was shrunk by 66MB 00:05:41.745 EAL: Trying to obtain current memory policy. 00:05:41.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.745 EAL: Restoring previous memory policy: 4 00:05:41.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.745 EAL: request: mp_malloc_sync 00:05:41.745 EAL: No shared files mode enabled, IPC is disabled 00:05:41.745 EAL: Heap on socket 0 was expanded by 130MB 00:05:41.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.745 EAL: request: mp_malloc_sync 00:05:41.745 EAL: No shared files mode enabled, IPC is disabled 00:05:41.745 EAL: Heap on socket 0 was shrunk by 130MB 00:05:41.745 EAL: Trying to obtain current memory policy. 00:05:41.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.745 EAL: Restoring previous memory policy: 4 00:05:41.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.745 EAL: request: mp_malloc_sync 00:05:41.745 EAL: No shared files mode enabled, IPC is disabled 00:05:41.745 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.004 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.004 EAL: request: mp_malloc_sync 00:05:42.004 EAL: No shared files mode enabled, IPC is disabled 00:05:42.004 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.004 EAL: Trying to obtain current memory policy. 
00:05:42.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.264 EAL: Restoring previous memory policy: 4 00:05:42.264 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.264 EAL: request: mp_malloc_sync 00:05:42.264 EAL: No shared files mode enabled, IPC is disabled 00:05:42.264 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.264 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.524 EAL: request: mp_malloc_sync 00:05:42.524 EAL: No shared files mode enabled, IPC is disabled 00:05:42.524 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.524 EAL: Trying to obtain current memory policy. 00:05:42.524 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.784 EAL: Restoring previous memory policy: 4 00:05:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.784 EAL: request: mp_malloc_sync 00:05:42.784 EAL: No shared files mode enabled, IPC is disabled 00:05:42.784 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.614 passed 00:05:43.614 00:05:43.614 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.614 suites 1 1 n/a 0 0 00:05:43.614 tests 2 2 2 0 0 00:05:43.614 asserts 5316 5316 5316 0 n/a 00:05:43.614 00:05:43.614 Elapsed time = 2.235 seconds 00:05:43.614 EAL: request: mp_malloc_sync 00:05:43.614 EAL: No shared files mode enabled, IPC is disabled 00:05:43.614 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.614 EAL: request: mp_malloc_sync 00:05:43.614 EAL: No shared files mode enabled, IPC is disabled 00:05:43.614 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.614 EAL: No shared files mode enabled, IPC is disabled 00:05:43.614 EAL: No shared files mode enabled, IPC is disabled 00:05:43.614 EAL: No shared files mode enabled, IPC is disabled 00:05:43.614 00:05:43.614 real 0m2.529s 00:05:43.614 user 0m1.351s 00:05:43.614 sys 0m1.039s 00:05:43.614 22:50:22 env.env_vtophys -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:05:43.614 22:50:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.614 ************************************ 00:05:43.614 END TEST env_vtophys 00:05:43.614 ************************************ 00:05:43.614 22:50:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.614 22:50:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.614 22:50:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.614 22:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.614 ************************************ 00:05:43.615 START TEST env_pci 00:05:43.615 ************************************ 00:05:43.615 22:50:22 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.615 00:05:43.615 00:05:43.615 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.615 http://cunit.sourceforge.net/ 00:05:43.615 00:05:43.615 00:05:43.615 Suite: pci 00:05:43.615 Test: pci_hook ...[2024-11-26 22:50:22.632623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70594 has claimed it 00:05:43.615 passed 00:05:43.615 00:05:43.615 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.615 suites 1 1 n/a 0 0 00:05:43.615 tests 1 1 1 0 0 00:05:43.615 asserts 25 25 25 0 n/a 00:05:43.615 00:05:43.615 Elapsed time = 0.009 seconds 00:05:43.615 EAL: Cannot find device (10000:00:01.0) 00:05:43.615 EAL: Failed to attach device on primary process 00:05:43.615 00:05:43.615 real 0m0.125s 00:05:43.615 user 0m0.048s 00:05:43.615 sys 0m0.074s 00:05:43.615 22:50:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.615 22:50:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.615 ************************************ 00:05:43.615 END TEST env_pci 00:05:43.615 
************************************ 00:05:43.875 22:50:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.875 22:50:22 env -- env/env.sh@15 -- # uname 00:05:43.875 22:50:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.875 22:50:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.875 22:50:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.875 22:50:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:43.875 22:50:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.875 22:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.875 ************************************ 00:05:43.875 START TEST env_dpdk_post_init 00:05:43.875 ************************************ 00:05:43.875 22:50:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.875 EAL: Detected CPU lcores: 10 00:05:43.875 EAL: Detected NUMA nodes: 1 00:05:43.875 EAL: Detected shared linkage of DPDK 00:05:43.875 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.875 EAL: Selected IOVA mode 'PA' 00:05:44.135 Starting DPDK initialization... 00:05:44.135 Starting SPDK post initialization... 00:05:44.135 SPDK NVMe probe 00:05:44.135 Attaching to 0000:00:10.0 00:05:44.135 Attaching to 0000:00:11.0 00:05:44.135 Attached to 0000:00:10.0 00:05:44.135 Attached to 0000:00:11.0 00:05:44.135 Cleaning up... 
00:05:44.135 00:05:44.135 real 0m0.304s 00:05:44.135 user 0m0.093s 00:05:44.135 sys 0m0.111s 00:05:44.135 22:50:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.135 22:50:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.135 ************************************ 00:05:44.135 END TEST env_dpdk_post_init 00:05:44.135 ************************************ 00:05:44.135 22:50:23 env -- env/env.sh@26 -- # uname 00:05:44.135 22:50:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.135 22:50:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.135 22:50:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.135 22:50:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.135 22:50:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.135 ************************************ 00:05:44.135 START TEST env_mem_callbacks 00:05:44.135 ************************************ 00:05:44.135 22:50:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.135 EAL: Detected CPU lcores: 10 00:05:44.135 EAL: Detected NUMA nodes: 1 00:05:44.135 EAL: Detected shared linkage of DPDK 00:05:44.135 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.396 EAL: Selected IOVA mode 'PA' 00:05:44.396 00:05:44.396 00:05:44.396 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.396 http://cunit.sourceforge.net/ 00:05:44.396 00:05:44.396 00:05:44.396 Suite: memory 00:05:44.396 Test: test ... 
00:05:44.396 register 0x200000200000 2097152 00:05:44.396 malloc 3145728 00:05:44.396 register 0x200000400000 4194304 00:05:44.396 buf 0x200000500000 len 3145728 PASSED 00:05:44.396 malloc 64 00:05:44.396 buf 0x2000004fff40 len 64 PASSED 00:05:44.396 malloc 4194304 00:05:44.396 register 0x200000800000 6291456 00:05:44.396 buf 0x200000a00000 len 4194304 PASSED 00:05:44.396 free 0x200000500000 3145728 00:05:44.396 free 0x2000004fff40 64 00:05:44.396 unregister 0x200000400000 4194304 PASSED 00:05:44.396 free 0x200000a00000 4194304 00:05:44.396 unregister 0x200000800000 6291456 PASSED 00:05:44.396 malloc 8388608 00:05:44.396 register 0x200000400000 10485760 00:05:44.396 buf 0x200000600000 len 8388608 PASSED 00:05:44.396 free 0x200000600000 8388608 00:05:44.396 unregister 0x200000400000 10485760 PASSED 00:05:44.396 passed 00:05:44.396 00:05:44.396 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.396 suites 1 1 n/a 0 0 00:05:44.396 tests 1 1 1 0 0 00:05:44.396 asserts 15 15 15 0 n/a 00:05:44.396 00:05:44.396 Elapsed time = 0.014 seconds 00:05:44.396 00:05:44.396 real 0m0.227s 00:05:44.396 user 0m0.040s 00:05:44.396 sys 0m0.084s 00:05:44.396 22:50:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.396 22:50:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.396 ************************************ 00:05:44.396 END TEST env_mem_callbacks 00:05:44.396 ************************************ 00:05:44.396 00:05:44.396 real 0m4.089s 00:05:44.396 user 0m2.012s 00:05:44.396 sys 0m1.718s 00:05:44.396 22:50:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.396 22:50:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.396 ************************************ 00:05:44.396 END TEST env 00:05:44.396 ************************************ 00:05:44.657 22:50:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.657 22:50:23 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.657 22:50:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.657 22:50:23 -- common/autotest_common.sh@10 -- # set +x 00:05:44.657 ************************************ 00:05:44.657 START TEST rpc 00:05:44.657 ************************************ 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.657 * Looking for test storage... 00:05:44.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.657 22:50:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.657 22:50:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.657 22:50:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.657 22:50:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.657 22:50:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.657 22:50:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.657 22:50:23 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.657 22:50:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.657 22:50:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.657 22:50:23 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.657 22:50:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.657 22:50:23 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.657 22:50:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.657 22:50:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.657 22:50:23 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.657 --rc genhtml_branch_coverage=1 00:05:44.657 --rc genhtml_function_coverage=1 00:05:44.657 --rc genhtml_legend=1 00:05:44.657 --rc geninfo_all_blocks=1 00:05:44.657 --rc geninfo_unexecuted_blocks=1 00:05:44.657 00:05:44.657 ' 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.657 --rc genhtml_branch_coverage=1 00:05:44.657 --rc genhtml_function_coverage=1 00:05:44.657 --rc genhtml_legend=1 00:05:44.657 --rc geninfo_all_blocks=1 00:05:44.657 --rc geninfo_unexecuted_blocks=1 00:05:44.657 00:05:44.657 ' 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:44.657 --rc genhtml_branch_coverage=1 00:05:44.657 --rc genhtml_function_coverage=1 00:05:44.657 --rc genhtml_legend=1 00:05:44.657 --rc geninfo_all_blocks=1 00:05:44.657 --rc geninfo_unexecuted_blocks=1 00:05:44.657 00:05:44.657 ' 00:05:44.657 22:50:23 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.657 --rc genhtml_branch_coverage=1 00:05:44.657 --rc genhtml_function_coverage=1 00:05:44.657 --rc genhtml_legend=1 00:05:44.657 --rc geninfo_all_blocks=1 00:05:44.657 --rc geninfo_unexecuted_blocks=1 00:05:44.657 00:05:44.657 ' 00:05:44.916 22:50:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70727 00:05:44.916 22:50:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.916 22:50:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.916 22:50:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70727 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 70727 ']' 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.916 22:50:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.916 [2024-11-26 22:50:23.895146] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:05:44.916 [2024-11-26 22:50:23.895303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:05:44.916 [2024-11-26 22:50:24.036154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:45.176 [2024-11-26 22:50:24.064125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.177 [2024-11-26 22:50:24.090401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.177 [2024-11-26 22:50:24.090463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70727' to capture a snapshot of events at runtime. 00:05:45.177 [2024-11-26 22:50:24.090474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.177 [2024-11-26 22:50:24.090485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.177 [2024-11-26 22:50:24.090493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70727 for offline analysis/debug. 
00:05:45.177 [2024-11-26 22:50:24.090885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.747 22:50:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.747 22:50:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.747 22:50:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.747 22:50:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.747 22:50:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.747 22:50:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.747 22:50:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.747 22:50:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.747 22:50:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.747 ************************************ 00:05:45.747 START TEST rpc_integrity 00:05:45.747 ************************************ 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.747 22:50:24 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.747 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.747 { 00:05:45.747 "name": "Malloc0", 00:05:45.747 "aliases": [ 00:05:45.747 "5433db6a-64b0-4b8a-a128-fc3441e06442" 00:05:45.747 ], 00:05:45.747 "product_name": "Malloc disk", 00:05:45.747 "block_size": 512, 00:05:45.747 "num_blocks": 16384, 00:05:45.747 "uuid": "5433db6a-64b0-4b8a-a128-fc3441e06442", 00:05:45.747 "assigned_rate_limits": { 00:05:45.747 "rw_ios_per_sec": 0, 00:05:45.747 "rw_mbytes_per_sec": 0, 00:05:45.747 "r_mbytes_per_sec": 0, 00:05:45.747 "w_mbytes_per_sec": 0 00:05:45.747 }, 00:05:45.747 "claimed": false, 00:05:45.747 "zoned": false, 00:05:45.747 "supported_io_types": { 00:05:45.747 "read": true, 00:05:45.747 "write": true, 00:05:45.747 "unmap": true, 00:05:45.747 "flush": true, 00:05:45.747 "reset": true, 00:05:45.747 "nvme_admin": false, 00:05:45.747 "nvme_io": false, 00:05:45.747 "nvme_io_md": false, 00:05:45.747 "write_zeroes": true, 00:05:45.747 "zcopy": true, 00:05:45.747 "get_zone_info": false, 00:05:45.747 "zone_management": false, 00:05:45.747 "zone_append": false, 00:05:45.747 "compare": false, 00:05:45.747 "compare_and_write": false, 00:05:45.747 "abort": true, 00:05:45.747 "seek_hole": false, 
00:05:45.747 "seek_data": false, 00:05:45.747 "copy": true, 00:05:45.747 "nvme_iov_md": false 00:05:45.747 }, 00:05:45.747 "memory_domains": [ 00:05:45.747 { 00:05:45.747 "dma_device_id": "system", 00:05:45.747 "dma_device_type": 1 00:05:45.747 }, 00:05:45.747 { 00:05:45.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.747 "dma_device_type": 2 00:05:45.747 } 00:05:45.747 ], 00:05:45.747 "driver_specific": {} 00:05:45.747 } 00:05:45.747 ]' 00:05:45.747 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.007 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.007 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.007 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.007 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.007 [2024-11-26 22:50:24.885728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.007 [2024-11-26 22:50:24.885818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.007 [2024-11-26 22:50:24.885844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:46.007 [2024-11-26 22:50:24.885862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.008 [2024-11-26 22:50:24.888168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.008 [2024-11-26 22:50:24.888206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.008 Passthru0 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.008 { 00:05:46.008 "name": "Malloc0", 00:05:46.008 "aliases": [ 00:05:46.008 "5433db6a-64b0-4b8a-a128-fc3441e06442" 00:05:46.008 ], 00:05:46.008 "product_name": "Malloc disk", 00:05:46.008 "block_size": 512, 00:05:46.008 "num_blocks": 16384, 00:05:46.008 "uuid": "5433db6a-64b0-4b8a-a128-fc3441e06442", 00:05:46.008 "assigned_rate_limits": { 00:05:46.008 "rw_ios_per_sec": 0, 00:05:46.008 "rw_mbytes_per_sec": 0, 00:05:46.008 "r_mbytes_per_sec": 0, 00:05:46.008 "w_mbytes_per_sec": 0 00:05:46.008 }, 00:05:46.008 "claimed": true, 00:05:46.008 "claim_type": "exclusive_write", 00:05:46.008 "zoned": false, 00:05:46.008 "supported_io_types": { 00:05:46.008 "read": true, 00:05:46.008 "write": true, 00:05:46.008 "unmap": true, 00:05:46.008 "flush": true, 00:05:46.008 "reset": true, 00:05:46.008 "nvme_admin": false, 00:05:46.008 "nvme_io": false, 00:05:46.008 "nvme_io_md": false, 00:05:46.008 "write_zeroes": true, 00:05:46.008 "zcopy": true, 00:05:46.008 "get_zone_info": false, 00:05:46.008 "zone_management": false, 00:05:46.008 "zone_append": false, 00:05:46.008 "compare": false, 00:05:46.008 "compare_and_write": false, 00:05:46.008 "abort": true, 00:05:46.008 "seek_hole": false, 00:05:46.008 "seek_data": false, 00:05:46.008 "copy": true, 00:05:46.008 "nvme_iov_md": false 00:05:46.008 }, 00:05:46.008 "memory_domains": [ 00:05:46.008 { 00:05:46.008 "dma_device_id": "system", 00:05:46.008 "dma_device_type": 1 00:05:46.008 }, 00:05:46.008 { 00:05:46.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.008 "dma_device_type": 2 00:05:46.008 } 00:05:46.008 ], 00:05:46.008 "driver_specific": {} 00:05:46.008 }, 00:05:46.008 { 00:05:46.008 "name": "Passthru0", 00:05:46.008 "aliases": [ 00:05:46.008 "ab7f5342-207f-589a-94eb-431c8f20b903" 00:05:46.008 ], 00:05:46.008 "product_name": "passthru", 00:05:46.008 
"block_size": 512, 00:05:46.008 "num_blocks": 16384, 00:05:46.008 "uuid": "ab7f5342-207f-589a-94eb-431c8f20b903", 00:05:46.008 "assigned_rate_limits": { 00:05:46.008 "rw_ios_per_sec": 0, 00:05:46.008 "rw_mbytes_per_sec": 0, 00:05:46.008 "r_mbytes_per_sec": 0, 00:05:46.008 "w_mbytes_per_sec": 0 00:05:46.008 }, 00:05:46.008 "claimed": false, 00:05:46.008 "zoned": false, 00:05:46.008 "supported_io_types": { 00:05:46.008 "read": true, 00:05:46.008 "write": true, 00:05:46.008 "unmap": true, 00:05:46.008 "flush": true, 00:05:46.008 "reset": true, 00:05:46.008 "nvme_admin": false, 00:05:46.008 "nvme_io": false, 00:05:46.008 "nvme_io_md": false, 00:05:46.008 "write_zeroes": true, 00:05:46.008 "zcopy": true, 00:05:46.008 "get_zone_info": false, 00:05:46.008 "zone_management": false, 00:05:46.008 "zone_append": false, 00:05:46.008 "compare": false, 00:05:46.008 "compare_and_write": false, 00:05:46.008 "abort": true, 00:05:46.008 "seek_hole": false, 00:05:46.008 "seek_data": false, 00:05:46.008 "copy": true, 00:05:46.008 "nvme_iov_md": false 00:05:46.008 }, 00:05:46.008 "memory_domains": [ 00:05:46.008 { 00:05:46.008 "dma_device_id": "system", 00:05:46.008 "dma_device_type": 1 00:05:46.008 }, 00:05:46.008 { 00:05:46.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.008 "dma_device_type": 2 00:05:46.008 } 00:05:46.008 ], 00:05:46.008 "driver_specific": { 00:05:46.008 "passthru": { 00:05:46.008 "name": "Passthru0", 00:05:46.008 "base_bdev_name": "Malloc0" 00:05:46.008 } 00:05:46.008 } 00:05:46.008 } 00:05:46.008 ]' 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.008 22:50:24 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.008 22:50:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.008 22:50:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.008 22:50:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.008 22:50:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.008 22:50:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.008 22:50:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.008 00:05:46.008 real 0m0.332s 00:05:46.008 user 0m0.198s 00:05:46.008 sys 0m0.061s 00:05:46.008 22:50:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.008 22:50:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.008 ************************************ 00:05:46.008 END TEST rpc_integrity 00:05:46.008 ************************************ 00:05:46.008 22:50:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.008 22:50:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.008 22:50:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.008 22:50:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.008 ************************************ 00:05:46.008 START TEST rpc_plugins 00:05:46.008 ************************************ 00:05:46.008 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:46.008 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.008 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.008 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.268 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.268 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.268 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.268 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.268 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.268 { 00:05:46.268 "name": "Malloc1", 00:05:46.268 "aliases": [ 00:05:46.268 "681c1d60-015d-41d3-b8e8-37e1e2fb1926" 00:05:46.268 ], 00:05:46.268 "product_name": "Malloc disk", 00:05:46.268 "block_size": 4096, 00:05:46.268 "num_blocks": 256, 00:05:46.268 "uuid": "681c1d60-015d-41d3-b8e8-37e1e2fb1926", 00:05:46.268 "assigned_rate_limits": { 00:05:46.268 "rw_ios_per_sec": 0, 00:05:46.268 "rw_mbytes_per_sec": 0, 00:05:46.268 "r_mbytes_per_sec": 0, 00:05:46.268 "w_mbytes_per_sec": 0 00:05:46.268 }, 00:05:46.268 "claimed": false, 00:05:46.268 "zoned": false, 00:05:46.268 "supported_io_types": { 00:05:46.268 "read": true, 00:05:46.268 "write": true, 00:05:46.268 "unmap": true, 00:05:46.268 "flush": true, 00:05:46.268 "reset": true, 00:05:46.268 "nvme_admin": false, 00:05:46.268 "nvme_io": false, 00:05:46.268 "nvme_io_md": false, 00:05:46.268 "write_zeroes": true, 00:05:46.268 "zcopy": true, 00:05:46.268 "get_zone_info": false, 00:05:46.268 "zone_management": false, 00:05:46.268 "zone_append": false, 00:05:46.268 "compare": false, 00:05:46.268 "compare_and_write": false, 00:05:46.268 "abort": true, 00:05:46.268 "seek_hole": false, 00:05:46.268 "seek_data": false, 00:05:46.268 "copy": 
true, 00:05:46.269 "nvme_iov_md": false 00:05:46.269 }, 00:05:46.269 "memory_domains": [ 00:05:46.269 { 00:05:46.269 "dma_device_id": "system", 00:05:46.269 "dma_device_type": 1 00:05:46.269 }, 00:05:46.269 { 00:05:46.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.269 "dma_device_type": 2 00:05:46.269 } 00:05:46.269 ], 00:05:46.269 "driver_specific": {} 00:05:46.269 } 00:05:46.269 ]' 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.269 22:50:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.269 00:05:46.269 real 0m0.174s 00:05:46.269 user 0m0.098s 00:05:46.269 sys 0m0.032s 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.269 22:50:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.269 ************************************ 00:05:46.269 END TEST rpc_plugins 00:05:46.269 ************************************ 00:05:46.269 22:50:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.269 22:50:25 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.269 22:50:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.269 22:50:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.269 ************************************ 00:05:46.269 START TEST rpc_trace_cmd_test 00:05:46.269 ************************************ 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.269 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70727", 00:05:46.269 "tpoint_group_mask": "0x8", 00:05:46.269 "iscsi_conn": { 00:05:46.269 "mask": "0x2", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "scsi": { 00:05:46.269 "mask": "0x4", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "bdev": { 00:05:46.269 "mask": "0x8", 00:05:46.269 "tpoint_mask": "0xffffffffffffffff" 00:05:46.269 }, 00:05:46.269 "nvmf_rdma": { 00:05:46.269 "mask": "0x10", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "nvmf_tcp": { 00:05:46.269 "mask": "0x20", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "ftl": { 00:05:46.269 "mask": "0x40", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "blobfs": { 00:05:46.269 "mask": "0x80", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "dsa": { 00:05:46.269 "mask": "0x200", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "thread": { 00:05:46.269 "mask": "0x400", 00:05:46.269 
"tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "nvme_pcie": { 00:05:46.269 "mask": "0x800", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "iaa": { 00:05:46.269 "mask": "0x1000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "nvme_tcp": { 00:05:46.269 "mask": "0x2000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "bdev_nvme": { 00:05:46.269 "mask": "0x4000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "sock": { 00:05:46.269 "mask": "0x8000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "blob": { 00:05:46.269 "mask": "0x10000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "bdev_raid": { 00:05:46.269 "mask": "0x20000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 }, 00:05:46.269 "scheduler": { 00:05:46.269 "mask": "0x40000", 00:05:46.269 "tpoint_mask": "0x0" 00:05:46.269 } 00:05:46.269 }' 00:05:46.269 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.529 00:05:46.529 real 0m0.268s 00:05:46.529 user 0m0.222s 00:05:46.529 sys 0m0.034s 00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:46.529 22:50:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.529 ************************************ 00:05:46.529 END TEST rpc_trace_cmd_test 00:05:46.529 ************************************ 00:05:46.789 22:50:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.789 22:50:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.789 22:50:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.789 22:50:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.789 22:50:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.789 22:50:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 ************************************ 00:05:46.789 START TEST rpc_daemon_integrity 00:05:46.789 ************************************ 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.789 { 00:05:46.789 "name": "Malloc2", 00:05:46.789 "aliases": [ 00:05:46.789 "917ea52b-28a5-4b79-9e74-2fbf66720298" 00:05:46.789 ], 00:05:46.789 "product_name": "Malloc disk", 00:05:46.789 "block_size": 512, 00:05:46.789 "num_blocks": 16384, 00:05:46.789 "uuid": "917ea52b-28a5-4b79-9e74-2fbf66720298", 00:05:46.789 "assigned_rate_limits": { 00:05:46.789 "rw_ios_per_sec": 0, 00:05:46.789 "rw_mbytes_per_sec": 0, 00:05:46.789 "r_mbytes_per_sec": 0, 00:05:46.789 "w_mbytes_per_sec": 0 00:05:46.789 }, 00:05:46.789 "claimed": false, 00:05:46.789 "zoned": false, 00:05:46.789 "supported_io_types": { 00:05:46.789 "read": true, 00:05:46.789 "write": true, 00:05:46.789 "unmap": true, 00:05:46.789 "flush": true, 00:05:46.789 "reset": true, 00:05:46.789 "nvme_admin": false, 00:05:46.789 "nvme_io": false, 00:05:46.789 "nvme_io_md": false, 00:05:46.789 "write_zeroes": true, 00:05:46.789 "zcopy": true, 00:05:46.789 "get_zone_info": false, 00:05:46.789 "zone_management": false, 00:05:46.789 "zone_append": false, 00:05:46.789 "compare": false, 00:05:46.789 "compare_and_write": false, 00:05:46.789 "abort": true, 00:05:46.789 "seek_hole": false, 00:05:46.789 "seek_data": false, 00:05:46.789 "copy": true, 00:05:46.789 "nvme_iov_md": false 00:05:46.789 }, 00:05:46.789 "memory_domains": [ 00:05:46.789 { 00:05:46.789 "dma_device_id": "system", 00:05:46.789 "dma_device_type": 1 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.789 "dma_device_type": 2 00:05:46.789 } 
00:05:46.789 ], 00:05:46.789 "driver_specific": {} 00:05:46.789 } 00:05:46.789 ]' 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 [2024-11-26 22:50:25.862552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.789 [2024-11-26 22:50:25.862606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.789 [2024-11-26 22:50:25.862631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:46.789 [2024-11-26 22:50:25.862642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.789 [2024-11-26 22:50:25.864736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.789 [2024-11-26 22:50:25.864778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.789 Passthru0 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.789 { 00:05:46.789 "name": "Malloc2", 00:05:46.789 "aliases": [ 00:05:46.789 "917ea52b-28a5-4b79-9e74-2fbf66720298" 
00:05:46.789 ], 00:05:46.789 "product_name": "Malloc disk", 00:05:46.789 "block_size": 512, 00:05:46.789 "num_blocks": 16384, 00:05:46.789 "uuid": "917ea52b-28a5-4b79-9e74-2fbf66720298", 00:05:46.789 "assigned_rate_limits": { 00:05:46.789 "rw_ios_per_sec": 0, 00:05:46.789 "rw_mbytes_per_sec": 0, 00:05:46.789 "r_mbytes_per_sec": 0, 00:05:46.789 "w_mbytes_per_sec": 0 00:05:46.789 }, 00:05:46.789 "claimed": true, 00:05:46.789 "claim_type": "exclusive_write", 00:05:46.789 "zoned": false, 00:05:46.789 "supported_io_types": { 00:05:46.789 "read": true, 00:05:46.789 "write": true, 00:05:46.789 "unmap": true, 00:05:46.789 "flush": true, 00:05:46.789 "reset": true, 00:05:46.789 "nvme_admin": false, 00:05:46.789 "nvme_io": false, 00:05:46.789 "nvme_io_md": false, 00:05:46.789 "write_zeroes": true, 00:05:46.789 "zcopy": true, 00:05:46.789 "get_zone_info": false, 00:05:46.789 "zone_management": false, 00:05:46.789 "zone_append": false, 00:05:46.789 "compare": false, 00:05:46.789 "compare_and_write": false, 00:05:46.789 "abort": true, 00:05:46.789 "seek_hole": false, 00:05:46.789 "seek_data": false, 00:05:46.789 "copy": true, 00:05:46.789 "nvme_iov_md": false 00:05:46.789 }, 00:05:46.789 "memory_domains": [ 00:05:46.789 { 00:05:46.789 "dma_device_id": "system", 00:05:46.789 "dma_device_type": 1 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.789 "dma_device_type": 2 00:05:46.789 } 00:05:46.789 ], 00:05:46.789 "driver_specific": {} 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "name": "Passthru0", 00:05:46.789 "aliases": [ 00:05:46.789 "1b04b354-b518-5d94-88ce-91b01f1b2bf7" 00:05:46.789 ], 00:05:46.789 "product_name": "passthru", 00:05:46.789 "block_size": 512, 00:05:46.789 "num_blocks": 16384, 00:05:46.789 "uuid": "1b04b354-b518-5d94-88ce-91b01f1b2bf7", 00:05:46.789 "assigned_rate_limits": { 00:05:46.789 "rw_ios_per_sec": 0, 00:05:46.789 "rw_mbytes_per_sec": 0, 00:05:46.789 "r_mbytes_per_sec": 0, 00:05:46.789 "w_mbytes_per_sec": 0 
00:05:46.789 }, 00:05:46.789 "claimed": false, 00:05:46.789 "zoned": false, 00:05:46.789 "supported_io_types": { 00:05:46.789 "read": true, 00:05:46.789 "write": true, 00:05:46.789 "unmap": true, 00:05:46.789 "flush": true, 00:05:46.789 "reset": true, 00:05:46.789 "nvme_admin": false, 00:05:46.789 "nvme_io": false, 00:05:46.789 "nvme_io_md": false, 00:05:46.789 "write_zeroes": true, 00:05:46.789 "zcopy": true, 00:05:46.789 "get_zone_info": false, 00:05:46.789 "zone_management": false, 00:05:46.789 "zone_append": false, 00:05:46.789 "compare": false, 00:05:46.789 "compare_and_write": false, 00:05:46.789 "abort": true, 00:05:46.789 "seek_hole": false, 00:05:46.789 "seek_data": false, 00:05:46.789 "copy": true, 00:05:46.789 "nvme_iov_md": false 00:05:46.789 }, 00:05:46.789 "memory_domains": [ 00:05:46.789 { 00:05:46.789 "dma_device_id": "system", 00:05:46.789 "dma_device_type": 1 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.789 "dma_device_type": 2 00:05:46.789 } 00:05:46.789 ], 00:05:46.789 "driver_specific": { 00:05:46.789 "passthru": { 00:05:46.789 "name": "Passthru0", 00:05:46.789 "base_bdev_name": "Malloc2" 00:05:46.789 } 00:05:46.789 } 00:05:46.789 } 00:05:46.789 ]' 00:05:46.789 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.050 22:50:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.050 22:50:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.050 00:05:47.050 real 0m0.309s 00:05:47.050 user 0m0.183s 00:05:47.050 sys 0m0.051s 00:05:47.050 22:50:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.050 22:50:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.050 ************************************ 00:05:47.050 END TEST rpc_daemon_integrity 00:05:47.050 ************************************ 00:05:47.050 22:50:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.050 22:50:26 rpc -- rpc/rpc.sh@84 -- # killprocess 70727 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 70727 ']' 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@958 -- # kill -0 70727 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70727 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.050 
killing process with pid 70727 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70727' 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@973 -- # kill 70727 00:05:47.050 22:50:26 rpc -- common/autotest_common.sh@978 -- # wait 70727 00:05:47.621 00:05:47.621 real 0m2.935s 00:05:47.621 user 0m3.514s 00:05:47.621 sys 0m0.921s 00:05:47.621 22:50:26 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.621 22:50:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.621 ************************************ 00:05:47.621 END TEST rpc 00:05:47.621 ************************************ 00:05:47.621 22:50:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.621 22:50:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.621 22:50:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.621 22:50:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.621 ************************************ 00:05:47.621 START TEST skip_rpc 00:05:47.621 ************************************ 00:05:47.621 22:50:26 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.621 * Looking for test storage... 
00:05:47.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.621 22:50:26 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.621 22:50:26 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.621 22:50:26 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.881 22:50:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.881 --rc genhtml_branch_coverage=1 00:05:47.881 --rc genhtml_function_coverage=1 00:05:47.881 --rc genhtml_legend=1 00:05:47.881 --rc geninfo_all_blocks=1 00:05:47.881 --rc geninfo_unexecuted_blocks=1 00:05:47.881 00:05:47.881 ' 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.881 --rc genhtml_branch_coverage=1 00:05:47.881 --rc genhtml_function_coverage=1 00:05:47.881 --rc genhtml_legend=1 00:05:47.881 --rc geninfo_all_blocks=1 00:05:47.881 --rc geninfo_unexecuted_blocks=1 00:05:47.881 00:05:47.881 ' 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.881 --rc genhtml_branch_coverage=1 00:05:47.881 --rc genhtml_function_coverage=1 00:05:47.881 --rc genhtml_legend=1 00:05:47.881 --rc geninfo_all_blocks=1 00:05:47.881 --rc geninfo_unexecuted_blocks=1 00:05:47.881 00:05:47.881 ' 00:05:47.881 22:50:26 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.881 --rc genhtml_branch_coverage=1 00:05:47.881 --rc genhtml_function_coverage=1 00:05:47.881 --rc genhtml_legend=1 00:05:47.881 --rc geninfo_all_blocks=1 00:05:47.881 --rc geninfo_unexecuted_blocks=1 00:05:47.881 00:05:47.882 ' 00:05:47.882 22:50:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.882 22:50:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.882 22:50:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.882 22:50:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.882 22:50:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.882 22:50:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.882 ************************************ 00:05:47.882 START TEST skip_rpc 00:05:47.882 ************************************ 00:05:47.882 22:50:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:47.882 22:50:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70934 00:05:47.882 22:50:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.882 22:50:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.882 22:50:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.882 [2024-11-26 22:50:26.905389] Starting SPDK v25.01-pre 
git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:47.882 [2024-11-26 22:50:26.905512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70934 ] 00:05:48.142 [2024-11-26 22:50:27.041749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.142 [2024-11-26 22:50:27.081397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.142 [2024-11-26 22:50:27.115132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 
-- # (( es > 128 )) 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70934 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70934 ']' 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70934 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70934 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.429 killing process with pid 70934 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70934' 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70934 00:05:53.429 22:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70934 00:05:53.429 00:05:53.429 real 0m5.439s 00:05:53.429 user 0m5.005s 00:05:53.429 sys 0m0.368s 00:05:53.429 22:50:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.429 22:50:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 ************************************ 00:05:53.429 END TEST skip_rpc 00:05:53.429 ************************************ 00:05:53.429 22:50:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.429 22:50:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:53.429 22:50:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.429 22:50:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 ************************************ 00:05:53.429 START TEST skip_rpc_with_json 00:05:53.429 ************************************ 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71016 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71016 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71016 ']' 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.429 22:50:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 [2024-11-26 22:50:32.431410] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:05:53.429 [2024-11-26 22:50:32.431619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:05:53.706 [2024-11-26 22:50:32.573502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.706 [2024-11-26 22:50:32.613296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.706 [2024-11-26 22:50:32.638026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 [2024-11-26 22:50:33.241169] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.292 request: 00:05:54.292 { 00:05:54.292 "trtype": "tcp", 00:05:54.292 "method": "nvmf_get_transports", 00:05:54.292 "req_id": 1 00:05:54.292 } 00:05:54.292 Got JSON-RPC error response 00:05:54.292 response: 00:05:54.292 { 00:05:54.292 "code": -19, 00:05:54.292 "message": "No such device" 00:05:54.292 } 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.292 22:50:33 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 [2024-11-26 22:50:33.253278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.292 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.552 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.552 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.553 { 00:05:54.553 "subsystems": [ 00:05:54.553 { 00:05:54.553 "subsystem": "fsdev", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "fsdev_set_opts", 00:05:54.553 "params": { 00:05:54.553 "fsdev_io_pool_size": 65535, 00:05:54.553 "fsdev_io_cache_size": 256 00:05:54.553 } 00:05:54.553 } 00:05:54.553 ] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "keyring", 00:05:54.553 "config": [] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "iobuf", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "iobuf_set_options", 00:05:54.553 "params": { 00:05:54.553 "small_pool_count": 8192, 00:05:54.553 "large_pool_count": 1024, 00:05:54.553 "small_bufsize": 8192, 00:05:54.553 "large_bufsize": 135168, 00:05:54.553 "enable_numa": false 00:05:54.553 } 00:05:54.553 } 00:05:54.553 ] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "sock", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "sock_set_default_impl", 00:05:54.553 "params": { 00:05:54.553 "impl_name": "posix" 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "sock_impl_set_options", 00:05:54.553 "params": { 00:05:54.553 "impl_name": "ssl", 
00:05:54.553 "recv_buf_size": 4096, 00:05:54.553 "send_buf_size": 4096, 00:05:54.553 "enable_recv_pipe": true, 00:05:54.553 "enable_quickack": false, 00:05:54.553 "enable_placement_id": 0, 00:05:54.553 "enable_zerocopy_send_server": true, 00:05:54.553 "enable_zerocopy_send_client": false, 00:05:54.553 "zerocopy_threshold": 0, 00:05:54.553 "tls_version": 0, 00:05:54.553 "enable_ktls": false 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "sock_impl_set_options", 00:05:54.553 "params": { 00:05:54.553 "impl_name": "posix", 00:05:54.553 "recv_buf_size": 2097152, 00:05:54.553 "send_buf_size": 2097152, 00:05:54.553 "enable_recv_pipe": true, 00:05:54.553 "enable_quickack": false, 00:05:54.553 "enable_placement_id": 0, 00:05:54.553 "enable_zerocopy_send_server": true, 00:05:54.553 "enable_zerocopy_send_client": false, 00:05:54.553 "zerocopy_threshold": 0, 00:05:54.553 "tls_version": 0, 00:05:54.553 "enable_ktls": false 00:05:54.553 } 00:05:54.553 } 00:05:54.553 ] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "vmd", 00:05:54.553 "config": [] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "accel", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "accel_set_options", 00:05:54.553 "params": { 00:05:54.553 "small_cache_size": 128, 00:05:54.553 "large_cache_size": 16, 00:05:54.553 "task_count": 2048, 00:05:54.553 "sequence_count": 2048, 00:05:54.553 "buf_count": 2048 00:05:54.553 } 00:05:54.553 } 00:05:54.553 ] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "bdev", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "bdev_set_options", 00:05:54.553 "params": { 00:05:54.553 "bdev_io_pool_size": 65535, 00:05:54.553 "bdev_io_cache_size": 256, 00:05:54.553 "bdev_auto_examine": true, 00:05:54.553 "iobuf_small_cache_size": 128, 00:05:54.553 "iobuf_large_cache_size": 16 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "bdev_raid_set_options", 00:05:54.553 "params": { 00:05:54.553 
"process_window_size_kb": 1024, 00:05:54.553 "process_max_bandwidth_mb_sec": 0 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "bdev_iscsi_set_options", 00:05:54.553 "params": { 00:05:54.553 "timeout_sec": 30 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "bdev_nvme_set_options", 00:05:54.553 "params": { 00:05:54.553 "action_on_timeout": "none", 00:05:54.553 "timeout_us": 0, 00:05:54.553 "timeout_admin_us": 0, 00:05:54.553 "keep_alive_timeout_ms": 10000, 00:05:54.553 "arbitration_burst": 0, 00:05:54.553 "low_priority_weight": 0, 00:05:54.553 "medium_priority_weight": 0, 00:05:54.553 "high_priority_weight": 0, 00:05:54.553 "nvme_adminq_poll_period_us": 10000, 00:05:54.553 "nvme_ioq_poll_period_us": 0, 00:05:54.553 "io_queue_requests": 0, 00:05:54.553 "delay_cmd_submit": true, 00:05:54.553 "transport_retry_count": 4, 00:05:54.553 "bdev_retry_count": 3, 00:05:54.553 "transport_ack_timeout": 0, 00:05:54.553 "ctrlr_loss_timeout_sec": 0, 00:05:54.553 "reconnect_delay_sec": 0, 00:05:54.553 "fast_io_fail_timeout_sec": 0, 00:05:54.553 "disable_auto_failback": false, 00:05:54.553 "generate_uuids": false, 00:05:54.553 "transport_tos": 0, 00:05:54.553 "nvme_error_stat": false, 00:05:54.553 "rdma_srq_size": 0, 00:05:54.553 "io_path_stat": false, 00:05:54.553 "allow_accel_sequence": false, 00:05:54.553 "rdma_max_cq_size": 0, 00:05:54.553 "rdma_cm_event_timeout_ms": 0, 00:05:54.553 "dhchap_digests": [ 00:05:54.553 "sha256", 00:05:54.553 "sha384", 00:05:54.553 "sha512" 00:05:54.553 ], 00:05:54.553 "dhchap_dhgroups": [ 00:05:54.553 "null", 00:05:54.553 "ffdhe2048", 00:05:54.553 "ffdhe3072", 00:05:54.553 "ffdhe4096", 00:05:54.553 "ffdhe6144", 00:05:54.553 "ffdhe8192" 00:05:54.553 ] 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": "bdev_nvme_set_hotplug", 00:05:54.553 "params": { 00:05:54.553 "period_us": 100000, 00:05:54.553 "enable": false 00:05:54.553 } 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "method": 
"bdev_wait_for_examine" 00:05:54.553 } 00:05:54.553 ] 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "scsi", 00:05:54.553 "config": null 00:05:54.553 }, 00:05:54.553 { 00:05:54.553 "subsystem": "scheduler", 00:05:54.553 "config": [ 00:05:54.553 { 00:05:54.553 "method": "framework_set_scheduler", 00:05:54.553 "params": { 00:05:54.553 "name": "static" 00:05:54.554 } 00:05:54.554 } 00:05:54.554 ] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "vhost_scsi", 00:05:54.554 "config": [] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "vhost_blk", 00:05:54.554 "config": [] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "ublk", 00:05:54.554 "config": [] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "nbd", 00:05:54.554 "config": [] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "nvmf", 00:05:54.554 "config": [ 00:05:54.554 { 00:05:54.554 "method": "nvmf_set_config", 00:05:54.554 "params": { 00:05:54.554 "discovery_filter": "match_any", 00:05:54.554 "admin_cmd_passthru": { 00:05:54.554 "identify_ctrlr": false 00:05:54.554 }, 00:05:54.554 "dhchap_digests": [ 00:05:54.554 "sha256", 00:05:54.554 "sha384", 00:05:54.554 "sha512" 00:05:54.554 ], 00:05:54.554 "dhchap_dhgroups": [ 00:05:54.554 "null", 00:05:54.554 "ffdhe2048", 00:05:54.554 "ffdhe3072", 00:05:54.554 "ffdhe4096", 00:05:54.554 "ffdhe6144", 00:05:54.554 "ffdhe8192" 00:05:54.554 ] 00:05:54.554 } 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "method": "nvmf_set_max_subsystems", 00:05:54.554 "params": { 00:05:54.554 "max_subsystems": 1024 00:05:54.554 } 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "method": "nvmf_set_crdt", 00:05:54.554 "params": { 00:05:54.554 "crdt1": 0, 00:05:54.554 "crdt2": 0, 00:05:54.554 "crdt3": 0 00:05:54.554 } 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "method": "nvmf_create_transport", 00:05:54.554 "params": { 00:05:54.554 "trtype": "TCP", 00:05:54.554 "max_queue_depth": 128, 00:05:54.554 "max_io_qpairs_per_ctrlr": 127, 00:05:54.554 
"in_capsule_data_size": 4096, 00:05:54.554 "max_io_size": 131072, 00:05:54.554 "io_unit_size": 131072, 00:05:54.554 "max_aq_depth": 128, 00:05:54.554 "num_shared_buffers": 511, 00:05:54.554 "buf_cache_size": 4294967295, 00:05:54.554 "dif_insert_or_strip": false, 00:05:54.554 "zcopy": false, 00:05:54.554 "c2h_success": true, 00:05:54.554 "sock_priority": 0, 00:05:54.554 "abort_timeout_sec": 1, 00:05:54.554 "ack_timeout": 0, 00:05:54.554 "data_wr_pool_size": 0 00:05:54.554 } 00:05:54.554 } 00:05:54.554 ] 00:05:54.554 }, 00:05:54.554 { 00:05:54.554 "subsystem": "iscsi", 00:05:54.554 "config": [ 00:05:54.554 { 00:05:54.554 "method": "iscsi_set_options", 00:05:54.554 "params": { 00:05:54.554 "node_base": "iqn.2016-06.io.spdk", 00:05:54.554 "max_sessions": 128, 00:05:54.554 "max_connections_per_session": 2, 00:05:54.554 "max_queue_depth": 64, 00:05:54.554 "default_time2wait": 2, 00:05:54.554 "default_time2retain": 20, 00:05:54.554 "first_burst_length": 8192, 00:05:54.554 "immediate_data": true, 00:05:54.554 "allow_duplicated_isid": false, 00:05:54.554 "error_recovery_level": 0, 00:05:54.554 "nop_timeout": 60, 00:05:54.554 "nop_in_interval": 30, 00:05:54.554 "disable_chap": false, 00:05:54.554 "require_chap": false, 00:05:54.554 "mutual_chap": false, 00:05:54.554 "chap_group": 0, 00:05:54.554 "max_large_datain_per_connection": 64, 00:05:54.554 "max_r2t_per_connection": 4, 00:05:54.554 "pdu_pool_size": 36864, 00:05:54.554 "immediate_data_pool_size": 16384, 00:05:54.554 "data_out_pool_size": 2048 00:05:54.554 } 00:05:54.554 } 00:05:54.554 ] 00:05:54.554 } 00:05:54.554 ] 00:05:54.554 } 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71016 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71016 ']' 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill 
-0 71016 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71016 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.554 killing process with pid 71016 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71016' 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71016 00:05:54.554 22:50:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71016 00:05:54.814 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71044 00:05:54.814 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.814 22:50:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71044 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71044 ']' 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71044 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71044 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.113 killing process with pid 71044 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71044' 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71044 00:06:00.113 22:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71044 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.373 00:06:00.373 real 0m6.976s 00:06:00.373 user 0m6.493s 00:06:00.373 sys 0m0.786s 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.373 ************************************ 00:06:00.373 END TEST skip_rpc_with_json 00:06:00.373 ************************************ 00:06:00.373 22:50:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.373 22:50:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.373 22:50:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.373 22:50:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.373 ************************************ 00:06:00.373 START TEST skip_rpc_with_delay 00:06:00.373 ************************************ 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.373 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.373 [2024-11-26 22:50:39.479677] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.633 00:06:00.633 real 0m0.187s 00:06:00.633 user 0m0.088s 00:06:00.633 sys 0m0.097s 00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.633 22:50:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.633 ************************************ 00:06:00.633 END TEST skip_rpc_with_delay 00:06:00.633 ************************************ 00:06:00.633 22:50:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.633 22:50:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.633 22:50:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.633 22:50:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.633 22:50:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.633 22:50:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.633 ************************************ 00:06:00.633 START TEST exit_on_failed_rpc_init 00:06:00.633 ************************************ 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71156 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71156 00:06:00.633 22:50:39 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71156 ']' 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.633 22:50:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.633 [2024-11-26 22:50:39.737931] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:00.633 [2024-11-26 22:50:39.738064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:06:00.893 [2024-11-26 22:50:39.878804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.893 [2024-11-26 22:50:39.920440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.893 [2024-11-26 22:50:39.945150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.462 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.722 [2024-11-26 22:50:40.653754] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:01.722 [2024-11-26 22:50:40.653859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71174 ] 00:06:01.722 [2024-11-26 22:50:40.789420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.722 [2024-11-26 22:50:40.829019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.982 [2024-11-26 22:50:40.870867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.982 [2024-11-26 22:50:40.870978] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:01.982 [2024-11-26 22:50:40.870992] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.982 [2024-11-26 22:50:40.871014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71156 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71156 ']' 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71156 00:06:01.982 22:50:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71156 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.982 killing process with pid 71156 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71156' 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71156 00:06:01.982 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71156 00:06:02.552 00:06:02.552 real 0m1.764s 00:06:02.552 user 0m1.892s 00:06:02.552 sys 0m0.544s 00:06:02.552 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.552 22:50:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.552 ************************************ 00:06:02.552 END TEST exit_on_failed_rpc_init 00:06:02.552 ************************************ 00:06:02.552 22:50:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.552 00:06:02.552 real 0m14.896s 00:06:02.552 user 0m13.703s 00:06:02.552 sys 0m2.119s 00:06:02.552 22:50:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.552 22:50:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.552 ************************************ 00:06:02.552 END TEST skip_rpc 00:06:02.552 ************************************ 00:06:02.552 22:50:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.552 22:50:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.552 22:50:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.552 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.552 ************************************ 00:06:02.552 START TEST rpc_client 00:06:02.552 ************************************ 00:06:02.552 22:50:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.552 * Looking for test storage... 
00:06:02.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.552 22:50:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.552 22:50:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.552 22:50:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.813 22:50:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.813 --rc genhtml_branch_coverage=1 00:06:02.813 --rc genhtml_function_coverage=1 00:06:02.813 --rc genhtml_legend=1 00:06:02.813 --rc geninfo_all_blocks=1 00:06:02.813 --rc geninfo_unexecuted_blocks=1 00:06:02.813 00:06:02.813 ' 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.813 --rc genhtml_branch_coverage=1 00:06:02.813 --rc genhtml_function_coverage=1 00:06:02.813 --rc genhtml_legend=1 00:06:02.813 --rc geninfo_all_blocks=1 00:06:02.813 --rc geninfo_unexecuted_blocks=1 00:06:02.813 00:06:02.813 ' 00:06:02.813 22:50:41 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.813 --rc genhtml_branch_coverage=1 00:06:02.813 --rc genhtml_function_coverage=1 00:06:02.813 --rc genhtml_legend=1 00:06:02.813 --rc geninfo_all_blocks=1 00:06:02.813 --rc geninfo_unexecuted_blocks=1 00:06:02.813 00:06:02.813 ' 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.813 --rc genhtml_branch_coverage=1 00:06:02.813 --rc genhtml_function_coverage=1 00:06:02.813 --rc genhtml_legend=1 00:06:02.813 --rc geninfo_all_blocks=1 00:06:02.813 --rc geninfo_unexecuted_blocks=1 00:06:02.813 00:06:02.813 ' 00:06:02.813 22:50:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.813 OK 00:06:02.813 22:50:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.813 00:06:02.813 real 0m0.304s 00:06:02.813 user 0m0.173s 00:06:02.813 sys 0m0.150s 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.813 22:50:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.813 ************************************ 00:06:02.813 END TEST rpc_client 00:06:02.813 ************************************ 00:06:02.813 22:50:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.813 22:50:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.813 22:50:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.813 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.813 ************************************ 00:06:02.813 START TEST json_config 00:06:02.813 ************************************ 00:06:02.813 22:50:41 json_config -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.074 22:50:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.074 22:50:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.074 22:50:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.074 22:50:42 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.074 22:50:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.074 22:50:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.074 22:50:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.074 22:50:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.074 22:50:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.074 22:50:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.074 22:50:42 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.074 22:50:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.074 22:50:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.074 22:50:42 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.074 22:50:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.074 22:50:42 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.074 22:50:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.074 22:50:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.074 22:50:42 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.074 22:50:42 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.074 22:50:42 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.074 --rc genhtml_branch_coverage=1 00:06:03.074 --rc genhtml_function_coverage=1 00:06:03.074 --rc genhtml_legend=1 00:06:03.074 --rc geninfo_all_blocks=1 00:06:03.074 --rc geninfo_unexecuted_blocks=1 00:06:03.074 00:06:03.074 ' 00:06:03.074 22:50:42 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.074 --rc genhtml_branch_coverage=1 00:06:03.074 --rc genhtml_function_coverage=1 00:06:03.074 --rc genhtml_legend=1 00:06:03.074 --rc geninfo_all_blocks=1 00:06:03.074 --rc geninfo_unexecuted_blocks=1 00:06:03.074 00:06:03.074 ' 00:06:03.074 22:50:42 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.074 --rc genhtml_branch_coverage=1 00:06:03.074 --rc genhtml_function_coverage=1 00:06:03.074 --rc genhtml_legend=1 00:06:03.074 --rc geninfo_all_blocks=1 00:06:03.074 --rc geninfo_unexecuted_blocks=1 00:06:03.074 00:06:03.074 ' 00:06:03.074 22:50:42 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.074 --rc genhtml_branch_coverage=1 00:06:03.074 --rc genhtml_function_coverage=1 00:06:03.074 --rc genhtml_legend=1 00:06:03.074 --rc geninfo_all_blocks=1 00:06:03.074 --rc geninfo_unexecuted_blocks=1 00:06:03.074 00:06:03.074 ' 00:06:03.074 22:50:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.074 22:50:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.074 22:50:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.074 22:50:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.074 22:50:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.074 22:50:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.074 22:50:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.074 22:50:42 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.074 22:50:42 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.074 22:50:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.074 22:50:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.074 22:50:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:03.074 22:50:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.074 22:50:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.075 22:50:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.075 22:50:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.075 WARNING: No tests are enabled so not running JSON configuration tests 00:06:03.075 22:50:42 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:03.075 22:50:42 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:03.075 00:06:03.075 real 0m0.233s 00:06:03.075 user 0m0.125s 00:06:03.075 sys 0m0.116s 00:06:03.075 22:50:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.075 22:50:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.075 ************************************ 00:06:03.075 END TEST json_config 00:06:03.075 ************************************ 00:06:03.075 22:50:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.075 22:50:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.075 22:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.075 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.075 ************************************ 00:06:03.075 START TEST json_config_extra_key 00:06:03.075 ************************************ 00:06:03.075 22:50:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.335 22:50:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.335 22:50:42 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.336 --rc genhtml_branch_coverage=1 00:06:03.336 --rc genhtml_function_coverage=1 00:06:03.336 --rc genhtml_legend=1 00:06:03.336 --rc geninfo_all_blocks=1 00:06:03.336 --rc geninfo_unexecuted_blocks=1 00:06:03.336 00:06:03.336 ' 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.336 --rc genhtml_branch_coverage=1 00:06:03.336 --rc genhtml_function_coverage=1 00:06:03.336 --rc 
genhtml_legend=1 00:06:03.336 --rc geninfo_all_blocks=1 00:06:03.336 --rc geninfo_unexecuted_blocks=1 00:06:03.336 00:06:03.336 ' 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.336 --rc genhtml_branch_coverage=1 00:06:03.336 --rc genhtml_function_coverage=1 00:06:03.336 --rc genhtml_legend=1 00:06:03.336 --rc geninfo_all_blocks=1 00:06:03.336 --rc geninfo_unexecuted_blocks=1 00:06:03.336 00:06:03.336 ' 00:06:03.336 22:50:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.336 --rc genhtml_branch_coverage=1 00:06:03.336 --rc genhtml_function_coverage=1 00:06:03.336 --rc genhtml_legend=1 00:06:03.336 --rc geninfo_all_blocks=1 00:06:03.336 --rc geninfo_unexecuted_blocks=1 00:06:03.336 00:06:03.336 ' 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=33d7bbe7-3e79-448f-a318-ad3eabe1cd49 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.336 22:50:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.336 22:50:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.336 22:50:42 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.336 22:50:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.336 22:50:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:03.336 22:50:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.336 22:50:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.336 INFO: launching applications... 00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:03.336 22:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.336 22:50:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.337 22:50:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71362 00:06:03.337 Waiting for target to run... 00:06:03.337 22:50:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.337 22:50:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71362 /var/tmp/spdk_tgt.sock 00:06:03.337 22:50:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71362 ']' 00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.337 22:50:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.596 [2024-11-26 22:50:42.504059] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:03.596 [2024-11-26 22:50:42.504231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71362 ] 00:06:04.166 [2024-11-26 22:50:43.037537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.166 [2024-11-26 22:50:43.074646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.166 [2024-11-26 22:50:43.097904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.425 22:50:43 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.425 22:50:43 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:04.425 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:04.425 INFO: shutting down applications... 00:06:04.425 22:50:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:04.425 22:50:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71362 ]] 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71362 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71362 00:06:04.425 22:50:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.685 22:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71362 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.686 SPDK target shutdown done 00:06:04.686 22:50:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.686 Success 00:06:04.686 22:50:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.686 00:06:04.686 real 0m1.629s 00:06:04.686 user 0m1.116s 00:06:04.686 sys 0m0.672s 00:06:04.686 22:50:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.686 22:50:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.686 ************************************ 
00:06:04.686 END TEST json_config_extra_key 00:06:04.686 ************************************ 00:06:04.945 22:50:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.945 22:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.945 22:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.945 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.945 ************************************ 00:06:04.945 START TEST alias_rpc 00:06:04.945 ************************************ 00:06:04.945 22:50:43 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.945 * Looking for test storage... 00:06:04.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.945 22:50:44 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.945 22:50:44 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.945 22:50:44 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.205 22:50:44 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.205 22:50:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.205 22:50:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.205 22:50:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.206 22:50:44 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.206 22:50:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.206 --rc genhtml_branch_coverage=1 00:06:05.206 --rc genhtml_function_coverage=1 00:06:05.206 --rc genhtml_legend=1 00:06:05.206 --rc geninfo_all_blocks=1 00:06:05.206 --rc geninfo_unexecuted_blocks=1 00:06:05.206 00:06:05.206 ' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.206 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.206 --rc genhtml_branch_coverage=1 00:06:05.206 --rc genhtml_function_coverage=1 00:06:05.206 --rc genhtml_legend=1 00:06:05.206 --rc geninfo_all_blocks=1 00:06:05.206 --rc geninfo_unexecuted_blocks=1 00:06:05.206 00:06:05.206 ' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.206 --rc genhtml_branch_coverage=1 00:06:05.206 --rc genhtml_function_coverage=1 00:06:05.206 --rc genhtml_legend=1 00:06:05.206 --rc geninfo_all_blocks=1 00:06:05.206 --rc geninfo_unexecuted_blocks=1 00:06:05.206 00:06:05.206 ' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.206 --rc genhtml_branch_coverage=1 00:06:05.206 --rc genhtml_function_coverage=1 00:06:05.206 --rc genhtml_legend=1 00:06:05.206 --rc geninfo_all_blocks=1 00:06:05.206 --rc geninfo_unexecuted_blocks=1 00:06:05.206 00:06:05.206 ' 00:06:05.206 22:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.206 22:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71430 00:06:05.206 22:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.206 22:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71430 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71430 ']' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.206 22:50:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.206 [2024-11-26 22:50:44.213285] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:05.206 [2024-11-26 22:50:44.213451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71430 ] 00:06:05.466 [2024-11-26 22:50:44.355691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.466 [2024-11-26 22:50:44.395432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.466 [2024-11-26 22:50:44.420222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.036 22:50:45 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.036 22:50:45 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.036 22:50:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.296 22:50:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71430 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71430 ']' 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71430 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71430 00:06:06.296 22:50:45 
alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.296 killing process with pid 71430 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71430' 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 71430 00:06:06.296 22:50:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 71430 00:06:06.556 ************************************ 00:06:06.556 END TEST alias_rpc 00:06:06.556 ************************************ 00:06:06.556 00:06:06.556 real 0m1.744s 00:06:06.556 user 0m1.734s 00:06:06.556 sys 0m0.529s 00:06:06.556 22:50:45 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.556 22:50:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.556 22:50:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.556 22:50:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.556 22:50:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.556 22:50:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.556 22:50:45 -- common/autotest_common.sh@10 -- # set +x 00:06:06.816 ************************************ 00:06:06.817 START TEST spdkcli_tcp 00:06:06.817 ************************************ 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.817 * Looking for test storage... 
00:06:06.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.817 22:50:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.817 --rc genhtml_branch_coverage=1 00:06:06.817 --rc genhtml_function_coverage=1 00:06:06.817 --rc genhtml_legend=1 00:06:06.817 --rc geninfo_all_blocks=1 00:06:06.817 --rc geninfo_unexecuted_blocks=1 00:06:06.817 00:06:06.817 ' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.817 --rc genhtml_branch_coverage=1 00:06:06.817 --rc genhtml_function_coverage=1 00:06:06.817 --rc genhtml_legend=1 00:06:06.817 --rc geninfo_all_blocks=1 00:06:06.817 --rc geninfo_unexecuted_blocks=1 00:06:06.817 00:06:06.817 ' 00:06:06.817 22:50:45 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.817 --rc genhtml_branch_coverage=1 00:06:06.817 --rc genhtml_function_coverage=1 00:06:06.817 --rc genhtml_legend=1 00:06:06.817 --rc geninfo_all_blocks=1 00:06:06.817 --rc geninfo_unexecuted_blocks=1 00:06:06.817 00:06:06.817 ' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.817 --rc genhtml_branch_coverage=1 00:06:06.817 --rc genhtml_function_coverage=1 00:06:06.817 --rc genhtml_legend=1 00:06:06.817 --rc geninfo_all_blocks=1 00:06:06.817 --rc geninfo_unexecuted_blocks=1 00:06:06.817 00:06:06.817 ' 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71515 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.817 22:50:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71515 00:06:06.817 22:50:45 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 71515 ']' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.817 22:50:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.077 [2024-11-26 22:50:46.031878] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:07.077 [2024-11-26 22:50:46.032045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71515 ] 00:06:07.077 [2024-11-26 22:50:46.174492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:07.337 [2024-11-26 22:50:46.213541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.337 [2024-11-26 22:50:46.239528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.337 [2024-11-26 22:50:46.239601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.907 22:50:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.907 22:50:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:07.907 22:50:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.907 22:50:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71532 00:06:07.907 22:50:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:08.167 [ 00:06:08.167 "bdev_malloc_delete", 00:06:08.167 "bdev_malloc_create", 00:06:08.168 "bdev_null_resize", 00:06:08.168 "bdev_null_delete", 00:06:08.168 "bdev_null_create", 00:06:08.168 "bdev_nvme_cuse_unregister", 00:06:08.168 "bdev_nvme_cuse_register", 00:06:08.168 "bdev_opal_new_user", 00:06:08.168 "bdev_opal_set_lock_state", 00:06:08.168 "bdev_opal_delete", 00:06:08.168 "bdev_opal_get_info", 00:06:08.168 "bdev_opal_create", 00:06:08.168 "bdev_nvme_opal_revert", 00:06:08.168 "bdev_nvme_opal_init", 00:06:08.168 "bdev_nvme_send_cmd", 00:06:08.168 "bdev_nvme_set_keys", 00:06:08.168 "bdev_nvme_get_path_iostat", 00:06:08.168 "bdev_nvme_get_mdns_discovery_info", 00:06:08.168 "bdev_nvme_stop_mdns_discovery", 00:06:08.168 "bdev_nvme_start_mdns_discovery", 00:06:08.168 "bdev_nvme_set_multipath_policy", 00:06:08.168 "bdev_nvme_set_preferred_path", 00:06:08.168 "bdev_nvme_get_io_paths", 00:06:08.168 "bdev_nvme_remove_error_injection", 00:06:08.168 "bdev_nvme_add_error_injection", 00:06:08.168 "bdev_nvme_get_discovery_info", 00:06:08.168 "bdev_nvme_stop_discovery", 00:06:08.168 "bdev_nvme_start_discovery", 00:06:08.168 
"bdev_nvme_get_controller_health_info", 00:06:08.168 "bdev_nvme_disable_controller", 00:06:08.168 "bdev_nvme_enable_controller", 00:06:08.168 "bdev_nvme_reset_controller", 00:06:08.168 "bdev_nvme_get_transport_statistics", 00:06:08.168 "bdev_nvme_apply_firmware", 00:06:08.168 "bdev_nvme_detach_controller", 00:06:08.168 "bdev_nvme_get_controllers", 00:06:08.168 "bdev_nvme_attach_controller", 00:06:08.168 "bdev_nvme_set_hotplug", 00:06:08.168 "bdev_nvme_set_options", 00:06:08.168 "bdev_passthru_delete", 00:06:08.168 "bdev_passthru_create", 00:06:08.168 "bdev_lvol_set_parent_bdev", 00:06:08.168 "bdev_lvol_set_parent", 00:06:08.168 "bdev_lvol_check_shallow_copy", 00:06:08.168 "bdev_lvol_start_shallow_copy", 00:06:08.168 "bdev_lvol_grow_lvstore", 00:06:08.168 "bdev_lvol_get_lvols", 00:06:08.168 "bdev_lvol_get_lvstores", 00:06:08.168 "bdev_lvol_delete", 00:06:08.168 "bdev_lvol_set_read_only", 00:06:08.168 "bdev_lvol_resize", 00:06:08.168 "bdev_lvol_decouple_parent", 00:06:08.168 "bdev_lvol_inflate", 00:06:08.168 "bdev_lvol_rename", 00:06:08.168 "bdev_lvol_clone_bdev", 00:06:08.168 "bdev_lvol_clone", 00:06:08.168 "bdev_lvol_snapshot", 00:06:08.168 "bdev_lvol_create", 00:06:08.168 "bdev_lvol_delete_lvstore", 00:06:08.168 "bdev_lvol_rename_lvstore", 00:06:08.168 "bdev_lvol_create_lvstore", 00:06:08.168 "bdev_raid_set_options", 00:06:08.168 "bdev_raid_remove_base_bdev", 00:06:08.168 "bdev_raid_add_base_bdev", 00:06:08.168 "bdev_raid_delete", 00:06:08.168 "bdev_raid_create", 00:06:08.168 "bdev_raid_get_bdevs", 00:06:08.168 "bdev_error_inject_error", 00:06:08.168 "bdev_error_delete", 00:06:08.168 "bdev_error_create", 00:06:08.168 "bdev_split_delete", 00:06:08.168 "bdev_split_create", 00:06:08.168 "bdev_delay_delete", 00:06:08.168 "bdev_delay_create", 00:06:08.168 "bdev_delay_update_latency", 00:06:08.168 "bdev_zone_block_delete", 00:06:08.168 "bdev_zone_block_create", 00:06:08.168 "blobfs_create", 00:06:08.168 "blobfs_detect", 00:06:08.168 "blobfs_set_cache_size", 00:06:08.168 
"bdev_aio_delete", 00:06:08.168 "bdev_aio_rescan", 00:06:08.168 "bdev_aio_create", 00:06:08.168 "bdev_ftl_set_property", 00:06:08.168 "bdev_ftl_get_properties", 00:06:08.168 "bdev_ftl_get_stats", 00:06:08.168 "bdev_ftl_unmap", 00:06:08.168 "bdev_ftl_unload", 00:06:08.168 "bdev_ftl_delete", 00:06:08.168 "bdev_ftl_load", 00:06:08.168 "bdev_ftl_create", 00:06:08.168 "bdev_virtio_attach_controller", 00:06:08.168 "bdev_virtio_scsi_get_devices", 00:06:08.168 "bdev_virtio_detach_controller", 00:06:08.168 "bdev_virtio_blk_set_hotplug", 00:06:08.168 "bdev_iscsi_delete", 00:06:08.168 "bdev_iscsi_create", 00:06:08.168 "bdev_iscsi_set_options", 00:06:08.168 "accel_error_inject_error", 00:06:08.168 "ioat_scan_accel_module", 00:06:08.168 "dsa_scan_accel_module", 00:06:08.168 "iaa_scan_accel_module", 00:06:08.168 "keyring_file_remove_key", 00:06:08.168 "keyring_file_add_key", 00:06:08.168 "keyring_linux_set_options", 00:06:08.168 "fsdev_aio_delete", 00:06:08.168 "fsdev_aio_create", 00:06:08.168 "iscsi_get_histogram", 00:06:08.168 "iscsi_enable_histogram", 00:06:08.168 "iscsi_set_options", 00:06:08.168 "iscsi_get_auth_groups", 00:06:08.168 "iscsi_auth_group_remove_secret", 00:06:08.168 "iscsi_auth_group_add_secret", 00:06:08.168 "iscsi_delete_auth_group", 00:06:08.168 "iscsi_create_auth_group", 00:06:08.168 "iscsi_set_discovery_auth", 00:06:08.168 "iscsi_get_options", 00:06:08.168 "iscsi_target_node_request_logout", 00:06:08.168 "iscsi_target_node_set_redirect", 00:06:08.168 "iscsi_target_node_set_auth", 00:06:08.168 "iscsi_target_node_add_lun", 00:06:08.168 "iscsi_get_stats", 00:06:08.168 "iscsi_get_connections", 00:06:08.168 "iscsi_portal_group_set_auth", 00:06:08.168 "iscsi_start_portal_group", 00:06:08.168 "iscsi_delete_portal_group", 00:06:08.168 "iscsi_create_portal_group", 00:06:08.168 "iscsi_get_portal_groups", 00:06:08.168 "iscsi_delete_target_node", 00:06:08.168 "iscsi_target_node_remove_pg_ig_maps", 00:06:08.168 "iscsi_target_node_add_pg_ig_maps", 00:06:08.168 
"iscsi_create_target_node", 00:06:08.168 "iscsi_get_target_nodes", 00:06:08.168 "iscsi_delete_initiator_group", 00:06:08.168 "iscsi_initiator_group_remove_initiators", 00:06:08.168 "iscsi_initiator_group_add_initiators", 00:06:08.168 "iscsi_create_initiator_group", 00:06:08.168 "iscsi_get_initiator_groups", 00:06:08.168 "nvmf_set_crdt", 00:06:08.168 "nvmf_set_config", 00:06:08.168 "nvmf_set_max_subsystems", 00:06:08.168 "nvmf_stop_mdns_prr", 00:06:08.168 "nvmf_publish_mdns_prr", 00:06:08.168 "nvmf_subsystem_get_listeners", 00:06:08.168 "nvmf_subsystem_get_qpairs", 00:06:08.168 "nvmf_subsystem_get_controllers", 00:06:08.168 "nvmf_get_stats", 00:06:08.168 "nvmf_get_transports", 00:06:08.168 "nvmf_create_transport", 00:06:08.168 "nvmf_get_targets", 00:06:08.168 "nvmf_delete_target", 00:06:08.168 "nvmf_create_target", 00:06:08.168 "nvmf_subsystem_allow_any_host", 00:06:08.168 "nvmf_subsystem_set_keys", 00:06:08.168 "nvmf_subsystem_remove_host", 00:06:08.168 "nvmf_subsystem_add_host", 00:06:08.168 "nvmf_ns_remove_host", 00:06:08.168 "nvmf_ns_add_host", 00:06:08.168 "nvmf_subsystem_remove_ns", 00:06:08.168 "nvmf_subsystem_set_ns_ana_group", 00:06:08.168 "nvmf_subsystem_add_ns", 00:06:08.168 "nvmf_subsystem_listener_set_ana_state", 00:06:08.168 "nvmf_discovery_get_referrals", 00:06:08.168 "nvmf_discovery_remove_referral", 00:06:08.168 "nvmf_discovery_add_referral", 00:06:08.168 "nvmf_subsystem_remove_listener", 00:06:08.168 "nvmf_subsystem_add_listener", 00:06:08.168 "nvmf_delete_subsystem", 00:06:08.168 "nvmf_create_subsystem", 00:06:08.168 "nvmf_get_subsystems", 00:06:08.168 "env_dpdk_get_mem_stats", 00:06:08.168 "nbd_get_disks", 00:06:08.168 "nbd_stop_disk", 00:06:08.168 "nbd_start_disk", 00:06:08.168 "ublk_recover_disk", 00:06:08.168 "ublk_get_disks", 00:06:08.168 "ublk_stop_disk", 00:06:08.168 "ublk_start_disk", 00:06:08.168 "ublk_destroy_target", 00:06:08.168 "ublk_create_target", 00:06:08.168 "virtio_blk_create_transport", 00:06:08.168 "virtio_blk_get_transports", 
00:06:08.168 "vhost_controller_set_coalescing", 00:06:08.168 "vhost_get_controllers", 00:06:08.168 "vhost_delete_controller", 00:06:08.168 "vhost_create_blk_controller", 00:06:08.168 "vhost_scsi_controller_remove_target", 00:06:08.169 "vhost_scsi_controller_add_target", 00:06:08.169 "vhost_start_scsi_controller", 00:06:08.169 "vhost_create_scsi_controller", 00:06:08.169 "thread_set_cpumask", 00:06:08.169 "scheduler_set_options", 00:06:08.169 "framework_get_governor", 00:06:08.169 "framework_get_scheduler", 00:06:08.169 "framework_set_scheduler", 00:06:08.169 "framework_get_reactors", 00:06:08.169 "thread_get_io_channels", 00:06:08.169 "thread_get_pollers", 00:06:08.169 "thread_get_stats", 00:06:08.169 "framework_monitor_context_switch", 00:06:08.169 "spdk_kill_instance", 00:06:08.169 "log_enable_timestamps", 00:06:08.169 "log_get_flags", 00:06:08.169 "log_clear_flag", 00:06:08.169 "log_set_flag", 00:06:08.169 "log_get_level", 00:06:08.169 "log_set_level", 00:06:08.169 "log_get_print_level", 00:06:08.169 "log_set_print_level", 00:06:08.169 "framework_enable_cpumask_locks", 00:06:08.169 "framework_disable_cpumask_locks", 00:06:08.169 "framework_wait_init", 00:06:08.169 "framework_start_init", 00:06:08.169 "scsi_get_devices", 00:06:08.169 "bdev_get_histogram", 00:06:08.169 "bdev_enable_histogram", 00:06:08.169 "bdev_set_qos_limit", 00:06:08.169 "bdev_set_qd_sampling_period", 00:06:08.169 "bdev_get_bdevs", 00:06:08.169 "bdev_reset_iostat", 00:06:08.169 "bdev_get_iostat", 00:06:08.169 "bdev_examine", 00:06:08.169 "bdev_wait_for_examine", 00:06:08.169 "bdev_set_options", 00:06:08.169 "accel_get_stats", 00:06:08.169 "accel_set_options", 00:06:08.169 "accel_set_driver", 00:06:08.169 "accel_crypto_key_destroy", 00:06:08.169 "accel_crypto_keys_get", 00:06:08.169 "accel_crypto_key_create", 00:06:08.169 "accel_assign_opc", 00:06:08.169 "accel_get_module_info", 00:06:08.169 "accel_get_opc_assignments", 00:06:08.169 "vmd_rescan", 00:06:08.169 "vmd_remove_device", 00:06:08.169 
"vmd_enable", 00:06:08.169 "sock_get_default_impl", 00:06:08.169 "sock_set_default_impl", 00:06:08.169 "sock_impl_set_options", 00:06:08.169 "sock_impl_get_options", 00:06:08.169 "iobuf_get_stats", 00:06:08.169 "iobuf_set_options", 00:06:08.169 "keyring_get_keys", 00:06:08.169 "framework_get_pci_devices", 00:06:08.169 "framework_get_config", 00:06:08.169 "framework_get_subsystems", 00:06:08.169 "fsdev_set_opts", 00:06:08.169 "fsdev_get_opts", 00:06:08.169 "trace_get_info", 00:06:08.169 "trace_get_tpoint_group_mask", 00:06:08.169 "trace_disable_tpoint_group", 00:06:08.169 "trace_enable_tpoint_group", 00:06:08.169 "trace_clear_tpoint_mask", 00:06:08.169 "trace_set_tpoint_mask", 00:06:08.169 "notify_get_notifications", 00:06:08.169 "notify_get_types", 00:06:08.169 "spdk_get_version", 00:06:08.169 "rpc_get_methods" 00:06:08.169 ] 00:06:08.169 22:50:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.169 22:50:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:08.169 22:50:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71515 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71515 ']' 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71515 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71515 00:06:08.169 killing process with pid 71515 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.169 22:50:47 spdkcli_tcp -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71515' 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71515 00:06:08.169 22:50:47 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71515 00:06:08.432 00:06:08.432 real 0m1.841s 00:06:08.432 user 0m3.040s 00:06:08.432 sys 0m0.595s 00:06:08.432 22:50:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.432 22:50:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.432 ************************************ 00:06:08.432 END TEST spdkcli_tcp 00:06:08.432 ************************************ 00:06:08.692 22:50:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.692 22:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.692 22:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.693 22:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.693 ************************************ 00:06:08.693 START TEST dpdk_mem_utility 00:06:08.693 ************************************ 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.693 * Looking for test storage... 
00:06:08.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.693 22:50:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.693 --rc genhtml_branch_coverage=1 00:06:08.693 --rc genhtml_function_coverage=1 00:06:08.693 --rc genhtml_legend=1 00:06:08.693 --rc geninfo_all_blocks=1 00:06:08.693 --rc geninfo_unexecuted_blocks=1 00:06:08.693 00:06:08.693 ' 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.693 --rc genhtml_branch_coverage=1 00:06:08.693 --rc genhtml_function_coverage=1 00:06:08.693 --rc genhtml_legend=1 00:06:08.693 --rc geninfo_all_blocks=1 00:06:08.693 --rc 
geninfo_unexecuted_blocks=1 00:06:08.693 00:06:08.693 ' 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.693 --rc genhtml_branch_coverage=1 00:06:08.693 --rc genhtml_function_coverage=1 00:06:08.693 --rc genhtml_legend=1 00:06:08.693 --rc geninfo_all_blocks=1 00:06:08.693 --rc geninfo_unexecuted_blocks=1 00:06:08.693 00:06:08.693 ' 00:06:08.693 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.693 --rc genhtml_branch_coverage=1 00:06:08.693 --rc genhtml_function_coverage=1 00:06:08.693 --rc genhtml_legend=1 00:06:08.693 --rc geninfo_all_blocks=1 00:06:08.693 --rc geninfo_unexecuted_blocks=1 00:06:08.693 00:06:08.693 ' 00:06:08.693 22:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.953 22:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71615 00:06:08.953 22:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.953 22:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71615 00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71615 ']' 00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.953 22:50:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 [2024-11-26 22:50:47.915102] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:08.953 [2024-11-26 22:50:47.915214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:06:08.953 [2024-11-26 22:50:48.048477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.213 [2024-11-26 22:50:48.086153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.213 [2024-11-26 22:50:48.110770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.784 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.784 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:09.784 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.784 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.784 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.785 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 { 00:06:09.785 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.785 } 00:06:09.785 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.785 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.785 DPDK memory size 818.000000 MiB in 1 heap(s) 
00:06:09.785 1 heaps totaling size 818.000000 MiB 00:06:09.785 size: 818.000000 MiB heap id: 0 00:06:09.785 end heaps---------- 00:06:09.785 9 mempools totaling size 603.782043 MiB 00:06:09.785 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.785 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.785 size: 100.555481 MiB name: bdev_io_71615 00:06:09.785 size: 50.003479 MiB name: msgpool_71615 00:06:09.785 size: 36.509338 MiB name: fsdev_io_71615 00:06:09.785 size: 21.763794 MiB name: PDU_Pool 00:06:09.785 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.785 size: 4.133484 MiB name: evtpool_71615 00:06:09.785 size: 0.026123 MiB name: Session_Pool 00:06:09.785 end mempools------- 00:06:09.785 6 memzones totaling size 4.142822 MiB 00:06:09.785 size: 1.000366 MiB name: RG_ring_0_71615 00:06:09.785 size: 1.000366 MiB name: RG_ring_1_71615 00:06:09.785 size: 1.000366 MiB name: RG_ring_4_71615 00:06:09.785 size: 1.000366 MiB name: RG_ring_5_71615 00:06:09.785 size: 0.125366 MiB name: RG_ring_2_71615 00:06:09.785 size: 0.015991 MiB name: RG_ring_3_71615 00:06:09.785 end memzones------- 00:06:09.785 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.785 heap id: 0 total size: 818.000000 MiB number of busy elements: 310 number of free elements: 15 00:06:09.785 list of free elements. 
size: 10.944336 MiB 00:06:09.785 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:09.785 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:09.785 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:09.785 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:09.785 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:09.785 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:09.785 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:09.785 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:09.785 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:06:09.785 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:09.785 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:09.785 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:09.785 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:09.785 element at address: 0x200028200000 with size: 0.396301 MiB 00:06:09.785 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:09.785 list of standard malloc elements. 
size: 199.126770 MiB 00:06:09.785 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:09.785 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:09.785 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:09.785 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:09.785 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:09.785 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:09.785 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.785 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:09.785 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:09.785 element at 
address: 0x2000004ff400 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:06:09.785 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:09.785 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:09.785 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d480 with 
size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:09.786 element at address: 
0x200000c7e980 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:09.786 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:09.786 
element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92440 with size: 0.000183 
MiB 00:06:09.786 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:09.786 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93940 
with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:09.787 element at 
address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:09.787 element at address: 0x200028265740 with size: 0.000183 MiB 00:06:09.787 element at address: 0x200028265800 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c400 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d140 with size: 0.000183 MiB 
00:06:09.787 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e640 with 
size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:09.787 element at address: 
0x20002826fb40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:09.787 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:09.787 list of memzone associated elements. size: 607.928894 MiB 00:06:09.787 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:09.787 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.787 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:09.787 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.787 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:09.787 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71615_0 00:06:09.787 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:09.787 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71615_0 00:06:09.787 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:09.787 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71615_0 00:06:09.787 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:09.787 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.787 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:09.788 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.788 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:09.788 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71615_0 00:06:09.788 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:09.788 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71615 00:06:09.788 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:09.788 associated memzone info: 
size: 1.007996 MiB name: MP_evtpool_71615 00:06:09.788 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:09.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.788 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:09.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.788 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:09.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.788 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:09.788 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.788 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:09.788 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71615 00:06:09.788 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:09.788 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71615 00:06:09.788 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:09.788 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71615 00:06:09.788 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:09.788 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71615 00:06:09.788 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:09.788 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71615 00:06:09.788 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:09.788 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71615 00:06:09.788 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:09.788 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.788 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:09.788 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.788 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:09.788 associated memzone info: size: 
0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.788 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:09.788 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71615 00:06:09.788 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:09.788 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71615 00:06:09.788 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:09.788 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.788 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:06:09.788 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.788 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:09.788 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71615 00:06:09.788 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:06:09.788 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.788 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:09.788 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71615 00:06:09.788 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:09.788 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71615 00:06:09.788 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:09.788 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71615 00:06:09.788 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:06:09.788 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.788 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.788 22:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71615 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71615 ']' 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71615 
00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71615 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.788 killing process with pid 71615 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71615' 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71615 00:06:09.788 22:50:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71615 00:06:10.358 00:06:10.358 real 0m1.639s 00:06:10.358 user 0m1.551s 00:06:10.358 sys 0m0.530s 00:06:10.358 22:50:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.358 22:50:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.358 ************************************ 00:06:10.358 END TEST dpdk_mem_utility 00:06:10.358 ************************************ 00:06:10.358 22:50:49 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.358 22:50:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.358 22:50:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.358 22:50:49 -- common/autotest_common.sh@10 -- # set +x 00:06:10.358 ************************************ 00:06:10.358 START TEST event 00:06:10.358 ************************************ 00:06:10.358 22:50:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.358 * Looking for test storage... 
00:06:10.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.358 22:50:49 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.358 22:50:49 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.358 22:50:49 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.617 22:50:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.617 22:50:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.617 22:50:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.617 22:50:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.617 22:50:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.617 22:50:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.617 22:50:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.617 22:50:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.617 22:50:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.617 22:50:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.617 22:50:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.617 22:50:49 event -- scripts/common.sh@344 -- # case "$op" in 00:06:10.617 22:50:49 event -- scripts/common.sh@345 -- # : 1 00:06:10.617 22:50:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.617 22:50:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.617 22:50:49 event -- scripts/common.sh@365 -- # decimal 1 00:06:10.617 22:50:49 event -- scripts/common.sh@353 -- # local d=1 00:06:10.617 22:50:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.617 22:50:49 event -- scripts/common.sh@355 -- # echo 1 00:06:10.617 22:50:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.617 22:50:49 event -- scripts/common.sh@366 -- # decimal 2 00:06:10.617 22:50:49 event -- scripts/common.sh@353 -- # local d=2 00:06:10.617 22:50:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.617 22:50:49 event -- scripts/common.sh@355 -- # echo 2 00:06:10.617 22:50:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.617 22:50:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.617 22:50:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.617 22:50:49 event -- scripts/common.sh@368 -- # return 0 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.617 --rc genhtml_branch_coverage=1 00:06:10.617 --rc genhtml_function_coverage=1 00:06:10.617 --rc genhtml_legend=1 00:06:10.617 --rc geninfo_all_blocks=1 00:06:10.617 --rc geninfo_unexecuted_blocks=1 00:06:10.617 00:06:10.617 ' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.617 --rc genhtml_branch_coverage=1 00:06:10.617 --rc genhtml_function_coverage=1 00:06:10.617 --rc genhtml_legend=1 00:06:10.617 --rc geninfo_all_blocks=1 00:06:10.617 --rc geninfo_unexecuted_blocks=1 00:06:10.617 00:06:10.617 ' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.617 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.617 --rc genhtml_branch_coverage=1 00:06:10.617 --rc genhtml_function_coverage=1 00:06:10.617 --rc genhtml_legend=1 00:06:10.617 --rc geninfo_all_blocks=1 00:06:10.617 --rc geninfo_unexecuted_blocks=1 00:06:10.617 00:06:10.617 ' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.617 --rc genhtml_branch_coverage=1 00:06:10.617 --rc genhtml_function_coverage=1 00:06:10.617 --rc genhtml_legend=1 00:06:10.617 --rc geninfo_all_blocks=1 00:06:10.617 --rc geninfo_unexecuted_blocks=1 00:06:10.617 00:06:10.617 ' 00:06:10.617 22:50:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.617 22:50:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.617 22:50:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:10.617 22:50:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.617 22:50:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.617 ************************************ 00:06:10.617 START TEST event_perf 00:06:10.617 ************************************ 00:06:10.617 22:50:49 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.617 Running I/O for 1 seconds...[2024-11-26 22:50:49.589021] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:10.617 [2024-11-26 22:50:49.589164] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71701 ] 00:06:10.617 [2024-11-26 22:50:49.726879] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.877 [2024-11-26 22:50:49.764143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.877 [2024-11-26 22:50:49.791496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.877 [2024-11-26 22:50:49.791713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.877 [2024-11-26 22:50:49.792234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.877 [2024-11-26 22:50:49.792333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.816 Running I/O for 1 seconds... 00:06:11.816 lcore 0: 80604 00:06:11.816 lcore 1: 80593 00:06:11.816 lcore 2: 80596 00:06:11.816 lcore 3: 80600 00:06:11.816 done. 
00:06:11.816 00:06:11.816 real 0m1.323s 00:06:11.816 user 0m4.090s 00:06:11.816 sys 0m0.119s 00:06:11.816 22:50:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.817 22:50:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.817 ************************************ 00:06:11.817 END TEST event_perf 00:06:11.817 ************************************ 00:06:11.817 22:50:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.817 22:50:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:11.817 22:50:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.817 22:50:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.817 ************************************ 00:06:11.817 START TEST event_reactor 00:06:11.817 ************************************ 00:06:11.817 22:50:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:12.077 [2024-11-26 22:50:50.981658] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:12.077 [2024-11-26 22:50:50.981801] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71735 ] 00:06:12.077 [2024-11-26 22:50:51.118042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:12.077 [2024-11-26 22:50:51.157392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.077 [2024-11-26 22:50:51.181272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.457 test_start 00:06:13.457 oneshot 00:06:13.457 tick 100 00:06:13.457 tick 100 00:06:13.457 tick 250 00:06:13.457 tick 100 00:06:13.457 tick 100 00:06:13.457 tick 100 00:06:13.457 tick 250 00:06:13.457 tick 500 00:06:13.457 tick 100 00:06:13.457 tick 100 00:06:13.457 tick 250 00:06:13.457 tick 100 00:06:13.457 tick 100 00:06:13.457 test_end 00:06:13.457 00:06:13.457 real 0m1.315s 00:06:13.457 user 0m1.112s 00:06:13.457 sys 0m0.096s 00:06:13.457 22:50:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.457 22:50:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 END TEST event_reactor 00:06:13.457 ************************************ 00:06:13.457 22:50:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.457 22:50:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:13.457 22:50:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.457 22:50:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 START TEST event_reactor_perf 00:06:13.457 ************************************ 00:06:13.457 22:50:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.457 [2024-11-26 22:50:52.378059] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:13.457 [2024-11-26 22:50:52.378245] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:06:13.457 [2024-11-26 22:50:52.520419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.457 [2024-11-26 22:50:52.560750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.717 [2024-11-26 22:50:52.586599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.656 test_start 00:06:14.656 test_end 00:06:14.656 Performance: 407825 events per second 00:06:14.656 00:06:14.656 real 0m1.328s 00:06:14.656 user 0m1.113s 00:06:14.656 sys 0m0.109s 00:06:14.656 22:50:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.656 22:50:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.656 ************************************ 00:06:14.656 END TEST event_reactor_perf 00:06:14.656 ************************************ 00:06:14.656 22:50:53 event -- event/event.sh@49 -- # uname -s 00:06:14.656 22:50:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.656 22:50:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.656 22:50:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.656 22:50:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.656 22:50:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.656 ************************************ 00:06:14.656 START TEST event_scheduler 00:06:14.656 ************************************ 00:06:14.656 22:50:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.916 * Looking for test storage... 00:06:14.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.916 22:50:53 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.916 22:50:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.916 22:50:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.916 22:50:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.916 22:50:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.916 22:50:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.917 22:50:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.917 --rc genhtml_branch_coverage=1 00:06:14.917 --rc genhtml_function_coverage=1 00:06:14.917 --rc genhtml_legend=1 00:06:14.917 --rc geninfo_all_blocks=1 00:06:14.917 --rc geninfo_unexecuted_blocks=1 00:06:14.917 00:06:14.917 ' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.917 --rc genhtml_branch_coverage=1 00:06:14.917 --rc genhtml_function_coverage=1 00:06:14.917 --rc 
genhtml_legend=1 00:06:14.917 --rc geninfo_all_blocks=1 00:06:14.917 --rc geninfo_unexecuted_blocks=1 00:06:14.917 00:06:14.917 ' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.917 --rc genhtml_branch_coverage=1 00:06:14.917 --rc genhtml_function_coverage=1 00:06:14.917 --rc genhtml_legend=1 00:06:14.917 --rc geninfo_all_blocks=1 00:06:14.917 --rc geninfo_unexecuted_blocks=1 00:06:14.917 00:06:14.917 ' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.917 --rc genhtml_branch_coverage=1 00:06:14.917 --rc genhtml_function_coverage=1 00:06:14.917 --rc genhtml_legend=1 00:06:14.917 --rc geninfo_all_blocks=1 00:06:14.917 --rc geninfo_unexecuted_blocks=1 00:06:14.917 00:06:14.917 ' 00:06:14.917 22:50:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.917 22:50:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71842 00:06:14.917 22:50:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.917 22:50:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.917 22:50:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71842 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71842 ']' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.917 22:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.917 [2024-11-26 22:50:54.038005] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:14.917 [2024-11-26 22:50:54.038130] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71842 ] 00:06:15.177 [2024-11-26 22:50:54.174740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.177 [2024-11-26 22:50:54.212557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.177 [2024-11-26 22:50:54.260160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.177 [2024-11-26 22:50:54.260488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.177 [2024-11-26 22:50:54.260452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.177 [2024-11-26 22:50:54.260618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.747 22:50:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.747 22:50:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:15.747 22:50:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.747 22:50:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.747 22:50:54 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.747 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:15.747 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.747 POWER: intel_pstate driver is not supported 00:06:15.747 POWER: cppc_cpufreq driver is not supported 00:06:15.747 POWER: amd-pstate driver is not supported 00:06:15.747 POWER: acpi-cpufreq driver is not supported 00:06:15.747 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.747 [2024-11-26 22:50:54.870208] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.747 [2024-11-26 22:50:54.870285] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:15.747 [2024-11-26 22:50:54.870304] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.747 [2024-11-26 22:50:54.870384] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.747 [2024-11-26 22:50:54.870415] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.747 [2024-11-26 22:50:54.870424] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:16.007 22:50:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:16.007 22:50:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.007 22:50:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 [2024-11-26 22:50:54.999498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:16.007 22:50:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:16.007 22:50:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.007 22:50:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.007 22:50:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 ************************************ 00:06:16.007 START TEST scheduler_create_thread 00:06:16.007 ************************************ 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 2 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 3 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 4 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 5 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:16.007 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.008 6 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.008 7 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.008 8 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.008 9 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.008 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.577 10 00:06:16.577 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.577 22:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:16.577 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.577 22:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.953 22:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.953 22:50:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:17.953 22:50:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:17.953 22:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.953 22:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.891 22:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.891 22:50:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.891 22:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.891 22:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.460 22:50:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.460 22:50:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.460 22:50:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.460 22:50:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.460 22:50:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.399 22:50:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.399 00:06:20.399 real 0m4.221s 00:06:20.399 user 0m0.029s 00:06:20.399 sys 0m0.009s 00:06:20.399 22:50:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.399 22:50:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.399 ************************************ 00:06:20.399 END TEST scheduler_create_thread 00:06:20.399 ************************************ 00:06:20.399 22:50:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.399 22:50:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71842 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71842 ']' 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71842 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71842 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71842' 00:06:20.399 killing process with pid 71842 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71842 00:06:20.399 22:50:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71842 00:06:20.399 [2024-11-26 22:50:59.514419] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:20.969 00:06:20.969 real 0m6.173s 00:06:20.969 user 0m13.215s 00:06:20.969 sys 0m0.594s 00:06:20.969 22:50:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.969 22:50:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.969 ************************************ 00:06:20.969 END TEST event_scheduler 00:06:20.969 ************************************ 00:06:20.969 22:50:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.969 22:50:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.969 22:50:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.969 22:50:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.969 22:50:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.969 ************************************ 00:06:20.969 START TEST app_repeat 00:06:20.969 ************************************ 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71954 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.969 
22:50:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.969 Process app_repeat pid: 71954 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71954' 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.969 spdk_app_start Round 0 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.969 22:50:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71954 /var/tmp/spdk-nbd.sock 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.969 22:50:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.969 [2024-11-26 22:51:00.049092] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:20.969 [2024-11-26 22:51:00.049241] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71954 ] 00:06:21.229 [2024-11-26 22:51:00.190373] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:21.229 [2024-11-26 22:51:00.230489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.229 [2024-11-26 22:51:00.256688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.229 [2024-11-26 22:51:00.256768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.170 22:51:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.170 22:51:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:22.170 22:51:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.170 Malloc0 00:06:22.170 22:51:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.444 Malloc1 00:06:22.444 22:51:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.444 22:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.726 /dev/nbd0 00:06:22.726 22:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.726 22:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.726 1+0 records in 00:06:22.726 1+0 records out 00:06:22.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478004 s, 8.6 MB/s 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 
00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.726 22:51:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.726 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.726 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.726 22:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.726 /dev/nbd1 00:06:22.985 22:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.985 22:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.985 22:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.986 1+0 records in 00:06:22.986 1+0 records out 00:06:22.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287762 s, 14.2 MB/s 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.986 22:51:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.986 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.986 22:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.986 22:51:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.986 22:51:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.986 22:51:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.986 22:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.986 { 00:06:22.986 "nbd_device": "/dev/nbd0", 00:06:22.986 "bdev_name": "Malloc0" 00:06:22.986 }, 00:06:22.986 { 00:06:22.986 "nbd_device": "/dev/nbd1", 00:06:22.986 "bdev_name": "Malloc1" 00:06:22.986 } 00:06:22.986 ]' 00:06:22.986 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.986 { 00:06:22.986 "nbd_device": "/dev/nbd0", 00:06:22.986 "bdev_name": "Malloc0" 00:06:22.986 }, 00:06:22.986 { 00:06:22.986 "nbd_device": "/dev/nbd1", 00:06:22.986 "bdev_name": "Malloc1" 00:06:22.986 } 00:06:22.986 ]' 00:06:22.986 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.245 /dev/nbd1' 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.245 
/dev/nbd1' 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.245 256+0 records in 00:06:23.245 256+0 records out 00:06:23.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139878 s, 75.0 MB/s 00:06:23.245 22:51:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.246 256+0 records in 00:06:23.246 256+0 records out 00:06:23.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254998 s, 41.1 MB/s 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.246 256+0 records in 00:06:23.246 256+0 records out 00:06:23.246 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0266068 s, 39.4 MB/s 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.246 22:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.246 22:51:02 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.506 22:51:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.766 22:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.766 
22:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.026 22:51:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.026 22:51:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.285 22:51:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.285 [2024-11-26 22:51:03.329909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.285 [2024-11-26 22:51:03.353300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.285 [2024-11-26 22:51:03.353305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.285 [2024-11-26 22:51:03.394578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.285 [2024-11-26 22:51:03.394665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:27.575 spdk_app_start Round 1 00:06:27.575 22:51:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.575 22:51:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:27.575 22:51:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71954 /var/tmp/spdk-nbd.sock 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.575 22:51:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.575 22:51:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.575 Malloc0 00:06:27.575 22:51:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.834 Malloc1 00:06:27.834 22:51:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.834 
22:51:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.834 22:51:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.094 /dev/nbd0 00:06:28.094 22:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.094 22:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.094 22:51:07 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.094 22:51:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.095 1+0 records in 00:06:28.095 1+0 records out 00:06:28.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349471 s, 11.7 MB/s 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.095 22:51:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.095 22:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.095 22:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.095 22:51:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.354 /dev/nbd1 00:06:28.354 22:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.354 22:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.354 22:51:07 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.354 22:51:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.355 1+0 records in 00:06:28.355 1+0 records out 00:06:28.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272336 s, 15.0 MB/s 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.355 22:51:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.355 22:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.355 22:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.355 22:51:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.355 22:51:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.355 22:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.614 { 00:06:28.614 "nbd_device": "/dev/nbd0", 00:06:28.614 "bdev_name": "Malloc0" 00:06:28.614 }, 00:06:28.614 { 00:06:28.614 "nbd_device": "/dev/nbd1", 00:06:28.614 "bdev_name": 
"Malloc1" 00:06:28.614 } 00:06:28.614 ]' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.614 { 00:06:28.614 "nbd_device": "/dev/nbd0", 00:06:28.614 "bdev_name": "Malloc0" 00:06:28.614 }, 00:06:28.614 { 00:06:28.614 "nbd_device": "/dev/nbd1", 00:06:28.614 "bdev_name": "Malloc1" 00:06:28.614 } 00:06:28.614 ]' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.614 /dev/nbd1' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.614 /dev/nbd1' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.614 256+0 records in 00:06:28.614 256+0 records out 00:06:28.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130972 s, 80.1 MB/s 
00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.614 256+0 records in 00:06:28.614 256+0 records out 00:06:28.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245688 s, 42.7 MB/s 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.614 256+0 records in 00:06:28.614 256+0 records out 00:06:28.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260952 s, 40.2 MB/s 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.614 22:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.615 22:51:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.874 22:51:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.135 22:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.394 22:51:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.394 22:51:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.653 22:51:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.653 [2024-11-26 22:51:08.720911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.653 [2024-11-26 22:51:08.744914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.653 [2024-11-26 22:51:08.744920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.912 [2024-11-26 22:51:08.786461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.912 [2024-11-26 22:51:08.786529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.197 spdk_app_start Round 2 00:06:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.197 22:51:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.197 22:51:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.197 22:51:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71954 /var/tmp/spdk-nbd.sock 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.197 22:51:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.197 22:51:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.197 Malloc0 00:06:33.197 22:51:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.197 Malloc1 00:06:33.197 22:51:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.197 22:51:12 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.197 22:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.457 /dev/nbd0 00:06:33.457 22:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.457 22:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.457 1+0 records in 00:06:33.457 1+0 records out 00:06:33.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332436 s, 12.3 MB/s 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.457 
22:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.457 22:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:33.457 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.457 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.457 22:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.715 /dev/nbd1 00:06:33.715 22:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.715 22:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.715 22:51:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:33.715 22:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:33.715 22:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.715 22:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.715 22:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.716 1+0 records in 00:06:33.716 1+0 records out 00:06:33.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352901 s, 11.6 MB/s 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:33.716 22:51:12 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.716 22:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:33.716 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.716 22:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.716 22:51:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.716 22:51:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.716 22:51:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.974 22:51:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.974 { 00:06:33.974 "nbd_device": "/dev/nbd0", 00:06:33.974 "bdev_name": "Malloc0" 00:06:33.974 }, 00:06:33.974 { 00:06:33.974 "nbd_device": "/dev/nbd1", 00:06:33.974 "bdev_name": "Malloc1" 00:06:33.974 } 00:06:33.974 ]' 00:06:33.974 22:51:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.974 22:51:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.974 { 00:06:33.974 "nbd_device": "/dev/nbd0", 00:06:33.974 "bdev_name": "Malloc0" 00:06:33.974 }, 00:06:33.974 { 00:06:33.974 "nbd_device": "/dev/nbd1", 00:06:33.974 "bdev_name": "Malloc1" 00:06:33.974 } 00:06:33.974 ]' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.974 /dev/nbd1' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.974 /dev/nbd1' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.974 
22:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144808 s, 72.4 MB/s 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261178 s, 40.1 MB/s 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259429 s, 40.4 MB/s 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.974 22:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.233 22:51:13 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.233 22:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.492 22:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.751 22:51:13 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.751 22:51:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.751 22:51:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.010 22:51:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.270 [2024-11-26 22:51:14.154608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.270 [2024-11-26 22:51:14.180840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.270 [2024-11-26 22:51:14.180843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.270 [2024-11-26 22:51:14.223899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.270 [2024-11-26 22:51:14.223950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:38.614 22:51:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71954 /var/tmp/spdk-nbd.sock 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:38.614 22:51:17 event.app_repeat -- event/event.sh@39 -- # killprocess 71954 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71954 ']' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71954 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71954 00:06:38.614 killing process with pid 71954 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71954' 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71954 00:06:38.614 22:51:17 event.app_repeat -- 
common/autotest_common.sh@978 -- # wait 71954 00:06:38.614 spdk_app_start is called in Round 0. 00:06:38.614 Shutdown signal received, stop current app iteration 00:06:38.614 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:38.614 spdk_app_start is called in Round 1. 00:06:38.614 Shutdown signal received, stop current app iteration 00:06:38.614 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:38.614 spdk_app_start is called in Round 2. 00:06:38.614 Shutdown signal received, stop current app iteration 00:06:38.614 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:38.614 spdk_app_start is called in Round 3. 00:06:38.614 Shutdown signal received, stop current app iteration 00:06:38.614 22:51:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.614 22:51:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.614 00:06:38.614 real 0m17.466s 00:06:38.614 user 0m38.790s 00:06:38.614 sys 0m2.483s 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.614 22:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.614 ************************************ 00:06:38.614 END TEST app_repeat 00:06:38.614 ************************************ 00:06:38.614 22:51:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.614 22:51:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.614 22:51:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.614 22:51:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.614 22:51:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.614 ************************************ 00:06:38.614 START TEST cpu_locks 00:06:38.614 ************************************ 00:06:38.614 22:51:17 event.cpu_locks -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.614 * Looking for test storage... 00:06:38.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.614 22:51:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 
00:06:38.614 00:06:38.614 ' 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 22:51:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.614 22:51:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.614 22:51:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.614 22:51:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.614 22:51:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.615 22:51:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.615 22:51:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.615 ************************************ 00:06:38.615 START TEST default_locks 00:06:38.615 ************************************ 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72381 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72381 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 72381 ']' 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.615 22:51:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.875 [2024-11-26 22:51:17.835717] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:38.875 [2024-11-26 22:51:17.835862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72381 ] 00:06:38.875 [2024-11-26 22:51:17.970771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:39.134 [2024-11-26 22:51:18.011007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.135 [2024-11-26 22:51:18.036544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.704 22:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.704 22:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:39.704 22:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72381 00:06:39.704 22:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72381 00:06:39.704 22:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72381 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72381 ']' 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72381 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72381 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.964 killing process with pid 72381 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72381' 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72381 00:06:39.964 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72381 00:06:40.544 22:51:19 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72381 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72381 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72381 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72381 ']' 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ERROR: process (pid: 72381) is no longer running 00:06:40.544 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72381) - No such process 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.544 00:06:40.544 real 0m1.669s 00:06:40.544 user 0m1.613s 00:06:40.544 sys 0m0.585s 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.544 22:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ************************************ 00:06:40.544 END TEST default_locks 00:06:40.544 ************************************ 00:06:40.545 22:51:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.545 22:51:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:40.545 22:51:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.545 22:51:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.545 ************************************ 00:06:40.545 START TEST default_locks_via_rpc 00:06:40.545 ************************************ 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72434 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72434 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72434 ']' 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.545 22:51:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.545 [2024-11-26 22:51:19.568107] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:40.545 [2024-11-26 22:51:19.568229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72434 ] 00:06:40.805 [2024-11-26 22:51:19.707284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.805 [2024-11-26 22:51:19.745405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.805 [2024-11-26 22:51:19.771225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72434 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72434 00:06:41.374 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72434 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72434 ']' 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72434 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72434 00:06:41.635 killing process with pid 72434 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72434' 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72434 00:06:41.635 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72434 00:06:41.895 00:06:41.895 real 0m1.494s 00:06:41.895 user 0m1.436s 
00:06:41.895 sys 0m0.517s 00:06:41.895 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.895 ************************************ 00:06:41.895 END TEST default_locks_via_rpc 00:06:41.895 ************************************ 00:06:41.895 22:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.895 22:51:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.895 22:51:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.895 22:51:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.895 22:51:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.155 ************************************ 00:06:42.155 START TEST non_locking_app_on_locked_coremask 00:06:42.155 ************************************ 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72476 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72476 /var/tmp/spdk.sock 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72476 ']' 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.155 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.155 [2024-11-26 22:51:21.131493] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:42.155 [2024-11-26 22:51:21.131624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72476 ] 00:06:42.155 [2024-11-26 22:51:21.272994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:42.415 [2024-11-26 22:51:21.312208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.415 [2024-11-26 22:51:21.340659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72492 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72492 /var/tmp/spdk2.sock 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72492 ']' 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.985 22:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.985 [2024-11-26 22:51:22.010585] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:42.985 [2024-11-26 22:51:22.010711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72492 ] 00:06:43.245 [2024-11-26 22:51:22.149311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.245 [2024-11-26 22:51:22.182336] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.245 [2024-11-26 22:51:22.182384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.245 [2024-11-26 22:51:22.238558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.815 22:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.815 22:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.815 22:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72476 00:06:43.815 22:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72476 00:06:43.816 22:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.075 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72476 00:06:44.075 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72476 ']' 00:06:44.075 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72476 00:06:44.075 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72476 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.076 killing process with pid 72476 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 72476' 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72476 00:06:44.076 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72476 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72492 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72492 ']' 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72492 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72492 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.015 killing process with pid 72492 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72492' 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72492 00:06:45.015 22:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72492 00:06:45.278 00:06:45.278 real 0m3.257s 00:06:45.278 user 0m3.393s 00:06:45.278 sys 0m1.009s 00:06:45.278 22:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:45.278 22:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.278 ************************************ 00:06:45.278 END TEST non_locking_app_on_locked_coremask 00:06:45.278 ************************************ 00:06:45.278 22:51:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.278 22:51:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.278 22:51:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.278 22:51:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.278 ************************************ 00:06:45.278 START TEST locking_app_on_unlocked_coremask 00:06:45.278 ************************************ 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72555 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72555 /var/tmp/spdk.sock 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72555 ']' 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.278 22:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.538 [2024-11-26 22:51:24.452383] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:45.538 [2024-11-26 22:51:24.453017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72555 ] 00:06:45.538 [2024-11-26 22:51:24.593601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.538 [2024-11-26 22:51:24.631169] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.538 [2024-11-26 22:51:24.631205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.538 [2024-11-26 22:51:24.655965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72566 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72566 /var/tmp/spdk2.sock 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72566 ']' 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.478 22:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.478 [2024-11-26 22:51:25.363440] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:46.478 [2024-11-26 22:51:25.363591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72566 ] 00:06:46.478 [2024-11-26 22:51:25.505317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.478 [2024-11-26 22:51:25.538518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.478 [2024-11-26 22:51:25.587478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.048 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.048 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.048 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72566 00:06:47.048 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72566 00:06:47.048 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72555 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72555 ']' 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72555 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72555 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.618 killing process with pid 72555 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72555' 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72555 00:06:47.618 22:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72555 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72566 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72566 ']' 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72566 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72566 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.205 killing process with pid 72566 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72566' 00:06:48.205 22:51:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72566 00:06:48.205 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72566 00:06:48.775 00:06:48.775 real 0m3.332s 00:06:48.775 user 0m3.452s 00:06:48.775 sys 0m1.076s 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.775 ************************************ 00:06:48.775 END TEST locking_app_on_unlocked_coremask 00:06:48.775 ************************************ 00:06:48.775 22:51:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:48.775 22:51:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.775 22:51:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.775 22:51:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.775 ************************************ 00:06:48.775 START TEST locking_app_on_locked_coremask 00:06:48.775 ************************************ 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72635 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72635 /var/tmp/spdk.sock 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72635 ']' 00:06:48.775 22:51:27 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.775 22:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.775 [2024-11-26 22:51:27.851351] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:48.775 [2024-11-26 22:51:27.851524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72635 ] 00:06:49.035 [2024-11-26 22:51:27.991975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:49.035 [2024-11-26 22:51:28.027104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.035 [2024-11-26 22:51:28.052849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72651 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72651 /var/tmp/spdk2.sock 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72651 /var/tmp/spdk2.sock 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72651 /var/tmp/spdk2.sock 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72651 ']' 00:06:49.604 22:51:28 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.604 22:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.873 [2024-11-26 22:51:28.753623] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:49.873 [2024-11-26 22:51:28.753774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72651 ] 00:06:49.873 [2024-11-26 22:51:28.893230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:49.873 [2024-11-26 22:51:28.925516] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72635 has claimed it. 00:06:49.873 [2024-11-26 22:51:28.925569] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:50.473 ERROR: process (pid: 72651) is no longer running 00:06:50.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72651) - No such process 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72635 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72635 00:06:50.473 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72635 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72635 ']' 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72635 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72635 00:06:50.733 
22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72635' 00:06:50.733 killing process with pid 72635 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72635 00:06:50.733 22:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72635 00:06:50.993 00:06:50.993 real 0m2.327s 00:06:50.993 user 0m2.514s 00:06:50.993 sys 0m0.696s 00:06:50.993 22:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.993 22:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.993 ************************************ 00:06:50.993 END TEST locking_app_on_locked_coremask 00:06:50.993 ************************************ 00:06:51.252 22:51:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:51.252 22:51:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.252 22:51:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.252 22:51:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.252 ************************************ 00:06:51.252 START TEST locking_overlapped_coremask 00:06:51.252 ************************************ 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72693 00:06:51.252 22:51:30 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72693 /var/tmp/spdk.sock 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72693 ']' 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.252 22:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.252 [2024-11-26 22:51:30.248210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:51.252 [2024-11-26 22:51:30.248378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72693 ] 00:06:51.512 [2024-11-26 22:51:30.389703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:51.512 [2024-11-26 22:51:30.427293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.512 [2024-11-26 22:51:30.454420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.512 [2024-11-26 22:51:30.454522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.512 [2024-11-26 22:51:30.454625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72711 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72711 /var/tmp/spdk2.sock 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72711 /var/tmp/spdk2.sock 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 
72711 /var/tmp/spdk2.sock 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72711 ']' 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.082 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 [2024-11-26 22:51:31.145392] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:52.082 [2024-11-26 22:51:31.145531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72711 ] 00:06:52.343 [2024-11-26 22:51:31.286705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.343 [2024-11-26 22:51:31.318617] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72693 has claimed it. 00:06:52.343 [2024-11-26 22:51:31.318662] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:52.912 ERROR: process (pid: 72711) is no longer running 00:06:52.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72711) - No such process 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72693 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72693 ']' 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72693 00:06:52.912 22:51:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72693 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.912 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.912 killing process with pid 72693 00:06:52.913 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72693' 00:06:52.913 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72693 00:06:52.913 22:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72693 00:06:53.173 00:06:53.173 real 0m2.048s 00:06:53.173 user 0m5.406s 00:06:53.173 sys 0m0.564s 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.173 ************************************ 00:06:53.173 END TEST locking_overlapped_coremask 00:06:53.173 ************************************ 00:06:53.173 22:51:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:53.173 22:51:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.173 22:51:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.173 22:51:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.173 ************************************ 00:06:53.173 START TEST 
locking_overlapped_coremask_via_rpc 00:06:53.173 ************************************ 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72753 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72753 /var/tmp/spdk.sock 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72753 ']' 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.173 22:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.434 [2024-11-26 22:51:32.367608] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:53.434 [2024-11-26 22:51:32.367766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72753 ] 00:06:53.434 [2024-11-26 22:51:32.509961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.434 [2024-11-26 22:51:32.547448] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:53.434 [2024-11-26 22:51:32.547510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.693 [2024-11-26 22:51:32.576431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.693 [2024-11-26 22:51:32.576536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.693 [2024-11-26 22:51:32.576651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72771 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72771 /var/tmp/spdk2.sock 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:54.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72771 ']' 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.263 22:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.263 [2024-11-26 22:51:33.257958] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:54.263 [2024-11-26 22:51:33.258634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:06:54.523 [2024-11-26 22:51:33.399815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.523 [2024-11-26 22:51:33.431804] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.523 [2024-11-26 22:51:33.431845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.523 [2024-11-26 22:51:33.532069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.523 [2024-11-26 22:51:33.535480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.523 [2024-11-26 22:51:33.535602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.462 22:51:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 [2024-11-26 22:51:34.251456] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72753 has claimed it. 00:06:55.462 request: 00:06:55.462 { 00:06:55.462 "method": "framework_enable_cpumask_locks", 00:06:55.462 "req_id": 1 00:06:55.462 } 00:06:55.462 Got JSON-RPC error response 00:06:55.462 response: 00:06:55.462 { 00:06:55.462 "code": -32603, 00:06:55.462 "message": "Failed to claim CPU core: 2" 00:06:55.462 } 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72753 /var/tmp/spdk.sock 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 72753 ']' 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72771 /var/tmp/spdk2.sock 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72771 ']' 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.462 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.723 00:06:55.723 real 0m2.433s 00:06:55.723 user 0m1.057s 00:06:55.723 sys 0m0.160s 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.723 22:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 ************************************ 00:06:55.723 END TEST locking_overlapped_coremask_via_rpc 00:06:55.723 ************************************ 00:06:55.723 22:51:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:55.723 22:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72753 ]] 00:06:55.723 22:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72753 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72753 ']' 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72753 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72753 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72753' 00:06:55.723 killing process with pid 72753 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72753 00:06:55.723 22:51:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72753 00:06:56.292 22:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72771 ]] 00:06:56.292 22:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72771 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72771 ']' 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72771 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72771 00:06:56.292 killing process with pid 72771 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 72771' 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72771 00:06:56.292 22:51:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72771 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.861 Process with pid 72753 is not found 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72753 ]] 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72753 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72753 ']' 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72753 00:06:56.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72753) - No such process 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72753 is not found' 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72771 ]] 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72771 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72771 ']' 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72771 00:06:56.861 Process with pid 72771 is not found 00:06:56.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72771) - No such process 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72771 is not found' 00:06:56.861 22:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.861 00:06:56.861 real 0m18.356s 00:06:56.861 user 0m31.359s 00:06:56.861 sys 0m5.927s 00:06:56.861 ************************************ 00:06:56.861 END TEST cpu_locks 00:06:56.861 ************************************ 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:56.861 22:51:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.861 ************************************ 00:06:56.861 END TEST event 00:06:56.861 ************************************ 00:06:56.861 00:06:56.861 real 0m46.628s 00:06:56.861 user 1m29.944s 00:06:56.861 sys 0m9.742s 00:06:56.861 22:51:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.861 22:51:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.122 22:51:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.122 22:51:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.122 22:51:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.122 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:57.122 ************************************ 00:06:57.122 START TEST thread 00:06:57.122 ************************************ 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.122 * Looking for test storage... 
00:06:57.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.122 22:51:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.122 22:51:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.122 22:51:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.122 22:51:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.122 22:51:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.122 22:51:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.122 22:51:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.122 22:51:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.122 22:51:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.122 22:51:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.122 22:51:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.122 22:51:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.122 22:51:36 thread -- scripts/common.sh@345 -- # : 1 00:06:57.122 22:51:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.122 22:51:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.122 22:51:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.122 22:51:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.122 22:51:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.122 22:51:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.122 22:51:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.122 22:51:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.122 22:51:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.122 22:51:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.122 22:51:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.122 22:51:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.122 22:51:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.122 22:51:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.122 22:51:36 thread -- scripts/common.sh@368 -- # return 0 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.122 --rc genhtml_branch_coverage=1 00:06:57.122 --rc genhtml_function_coverage=1 00:06:57.122 --rc genhtml_legend=1 00:06:57.122 --rc geninfo_all_blocks=1 00:06:57.122 --rc geninfo_unexecuted_blocks=1 00:06:57.122 00:06:57.122 ' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.122 --rc genhtml_branch_coverage=1 00:06:57.122 --rc genhtml_function_coverage=1 00:06:57.122 --rc genhtml_legend=1 00:06:57.122 --rc geninfo_all_blocks=1 00:06:57.122 --rc geninfo_unexecuted_blocks=1 00:06:57.122 00:06:57.122 ' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.122 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.122 --rc genhtml_branch_coverage=1 00:06:57.122 --rc genhtml_function_coverage=1 00:06:57.122 --rc genhtml_legend=1 00:06:57.122 --rc geninfo_all_blocks=1 00:06:57.122 --rc geninfo_unexecuted_blocks=1 00:06:57.122 00:06:57.122 ' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.122 --rc genhtml_branch_coverage=1 00:06:57.122 --rc genhtml_function_coverage=1 00:06:57.122 --rc genhtml_legend=1 00:06:57.122 --rc geninfo_all_blocks=1 00:06:57.122 --rc geninfo_unexecuted_blocks=1 00:06:57.122 00:06:57.122 ' 00:06:57.122 22:51:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.122 22:51:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.122 ************************************ 00:06:57.122 START TEST thread_poller_perf 00:06:57.122 ************************************ 00:06:57.122 22:51:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.383 [2024-11-26 22:51:36.293125] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:57.383 [2024-11-26 22:51:36.293319] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72911 ] 00:06:57.383 [2024-11-26 22:51:36.429638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:57.383 [2024-11-26 22:51:36.468223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.383 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:57.383 [2024-11-26 22:51:36.493202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.762 [2024-11-26T22:51:37.890Z] ====================================== 00:06:58.762 [2024-11-26T22:51:37.890Z] busy:2302961390 (cyc) 00:06:58.762 [2024-11-26T22:51:37.890Z] total_run_count: 413000 00:06:58.762 [2024-11-26T22:51:37.890Z] tsc_hz: 2294600000 (cyc) 00:06:58.762 [2024-11-26T22:51:37.890Z] ====================================== 00:06:58.762 [2024-11-26T22:51:37.890Z] poller_cost: 5576 (cyc), 2430 (nsec) 00:06:58.762 00:06:58.762 real 0m1.324s 00:06:58.762 user 0m1.122s 00:06:58.762 sys 0m0.096s 00:06:58.762 ************************************ 00:06:58.762 END TEST thread_poller_perf 00:06:58.762 ************************************ 00:06:58.762 22:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.762 22:51:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 22:51:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.762 22:51:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.762 22:51:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.762 22:51:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 ************************************ 00:06:58.762 START TEST thread_poller_perf 00:06:58.762 ************************************ 00:06:58.762 22:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.762 [2024-11-26 22:51:37.687840] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 
initialization... 00:06:58.762 [2024-11-26 22:51:37.688023] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72942 ] 00:06:58.762 [2024-11-26 22:51:37.820441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.762 [2024-11-26 22:51:37.856056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.762 [2024-11-26 22:51:37.884187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.762 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.142 [2024-11-26T22:51:39.270Z] ====================================== 00:07:00.142 [2024-11-26T22:51:39.270Z] busy:2298116550 (cyc) 00:07:00.142 [2024-11-26T22:51:39.270Z] total_run_count: 5286000 00:07:00.142 [2024-11-26T22:51:39.270Z] tsc_hz: 2294600000 (cyc) 00:07:00.142 [2024-11-26T22:51:39.270Z] ====================================== 00:07:00.142 [2024-11-26T22:51:39.270Z] poller_cost: 434 (cyc), 189 (nsec) 00:07:00.142 00:07:00.142 real 0m1.313s 00:07:00.142 user 0m1.114s 00:07:00.142 sys 0m0.094s 00:07:00.142 22:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.142 ************************************ 00:07:00.142 END TEST thread_poller_perf 00:07:00.142 ************************************ 00:07:00.142 22:51:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.142 22:51:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.142 ************************************ 00:07:00.142 END TEST thread 00:07:00.142 ************************************ 00:07:00.142 00:07:00.142 real 0m3.014s 00:07:00.142 user 0m2.410s 00:07:00.142 sys 0m0.406s 00:07:00.142 22:51:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.142 22:51:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.142 22:51:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:00.142 22:51:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.142 22:51:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.142 22:51:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.142 22:51:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.142 ************************************ 00:07:00.142 START TEST app_cmdline 00:07:00.142 ************************************ 00:07:00.142 22:51:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.142 * Looking for test storage... 00:07:00.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.142 22:51:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.142 22:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.142 22:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.402 22:51:39 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.402 22:51:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.402 --rc genhtml_branch_coverage=1 00:07:00.402 --rc genhtml_function_coverage=1 00:07:00.402 --rc genhtml_legend=1 00:07:00.402 --rc geninfo_all_blocks=1 00:07:00.402 --rc geninfo_unexecuted_blocks=1 00:07:00.402 
00:07:00.402 ' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.402 --rc genhtml_branch_coverage=1 00:07:00.402 --rc genhtml_function_coverage=1 00:07:00.402 --rc genhtml_legend=1 00:07:00.402 --rc geninfo_all_blocks=1 00:07:00.402 --rc geninfo_unexecuted_blocks=1 00:07:00.402 00:07:00.402 ' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.402 --rc genhtml_branch_coverage=1 00:07:00.402 --rc genhtml_function_coverage=1 00:07:00.402 --rc genhtml_legend=1 00:07:00.402 --rc geninfo_all_blocks=1 00:07:00.402 --rc geninfo_unexecuted_blocks=1 00:07:00.402 00:07:00.402 ' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.402 --rc genhtml_branch_coverage=1 00:07:00.402 --rc genhtml_function_coverage=1 00:07:00.402 --rc genhtml_legend=1 00:07:00.402 --rc geninfo_all_blocks=1 00:07:00.402 --rc geninfo_unexecuted_blocks=1 00:07:00.402 00:07:00.402 ' 00:07:00.402 22:51:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.402 22:51:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73031 00:07:00.402 22:51:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.402 22:51:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73031 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73031 ']' 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.402 22:51:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.402 [2024-11-26 22:51:39.408811] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:00.402 [2024-11-26 22:51:39.409006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73031 ] 00:07:00.662 [2024-11-26 22:51:39.542839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.662 [2024-11-26 22:51:39.582525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.662 [2024-11-26 22:51:39.607511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.230 22:51:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.230 22:51:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:01.230 22:51:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:01.489 { 00:07:01.489 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:07:01.489 "fields": { 00:07:01.489 "major": 25, 00:07:01.489 "minor": 1, 00:07:01.489 "patch": 0, 00:07:01.489 "suffix": "-pre", 00:07:01.489 "commit": "2f2acf4eb" 00:07:01.489 } 00:07:01.489 } 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.489 22:51:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.489 22:51:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.489 22:51:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.489 22:51:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.490 22:51:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.490 22:51:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.490 22:51:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:01.490 22:51:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.750 request: 00:07:01.750 { 00:07:01.750 "method": "env_dpdk_get_mem_stats", 00:07:01.750 "req_id": 1 00:07:01.750 } 00:07:01.750 Got JSON-RPC error response 00:07:01.750 response: 00:07:01.750 { 00:07:01.750 "code": -32601, 00:07:01.750 "message": "Method not found" 00:07:01.750 } 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.750 22:51:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73031 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73031 ']' 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73031 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73031 00:07:01.750 killing process with pid 73031 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73031' 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 73031 00:07:01.750 22:51:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 73031 00:07:02.010 
************************************ 00:07:02.010 END TEST app_cmdline 00:07:02.010 ************************************ 00:07:02.010 00:07:02.010 real 0m2.045s 00:07:02.010 user 0m2.312s 00:07:02.010 sys 0m0.560s 00:07:02.010 22:51:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.010 22:51:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.270 22:51:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.270 22:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.270 22:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.270 22:51:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.270 ************************************ 00:07:02.270 START TEST version 00:07:02.270 ************************************ 00:07:02.270 22:51:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.270 * Looking for test storage... 
00:07:02.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.270 22:51:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.270 22:51:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.270 22:51:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.530 22:51:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.530 22:51:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.530 22:51:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.530 22:51:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.530 22:51:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.530 22:51:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.530 22:51:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.530 22:51:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.530 22:51:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.530 22:51:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.530 22:51:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.530 22:51:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.530 22:51:41 version -- scripts/common.sh@344 -- # case "$op" in 00:07:02.530 22:51:41 version -- scripts/common.sh@345 -- # : 1 00:07:02.530 22:51:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.530 22:51:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.530 22:51:41 version -- scripts/common.sh@365 -- # decimal 1 00:07:02.530 22:51:41 version -- scripts/common.sh@353 -- # local d=1 00:07:02.530 22:51:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.530 22:51:41 version -- scripts/common.sh@355 -- # echo 1 00:07:02.530 22:51:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.530 22:51:41 version -- scripts/common.sh@366 -- # decimal 2 00:07:02.530 22:51:41 version -- scripts/common.sh@353 -- # local d=2 00:07:02.530 22:51:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.530 22:51:41 version -- scripts/common.sh@355 -- # echo 2 00:07:02.530 22:51:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.530 22:51:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.530 22:51:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.530 22:51:41 version -- scripts/common.sh@368 -- # return 0 00:07:02.530 22:51:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.531 22:51:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.531 --rc genhtml_branch_coverage=1 00:07:02.531 --rc genhtml_function_coverage=1 00:07:02.531 --rc genhtml_legend=1 00:07:02.531 --rc geninfo_all_blocks=1 00:07:02.531 --rc geninfo_unexecuted_blocks=1 00:07:02.531 00:07:02.531 ' 00:07:02.531 22:51:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.531 --rc genhtml_branch_coverage=1 00:07:02.531 --rc genhtml_function_coverage=1 00:07:02.531 --rc genhtml_legend=1 00:07:02.531 --rc geninfo_all_blocks=1 00:07:02.531 --rc geninfo_unexecuted_blocks=1 00:07:02.531 00:07:02.531 ' 00:07:02.531 22:51:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.531 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.531 --rc genhtml_branch_coverage=1 00:07:02.531 --rc genhtml_function_coverage=1 00:07:02.531 --rc genhtml_legend=1 00:07:02.531 --rc geninfo_all_blocks=1 00:07:02.531 --rc geninfo_unexecuted_blocks=1 00:07:02.531 00:07:02.531 ' 00:07:02.531 22:51:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.531 --rc genhtml_branch_coverage=1 00:07:02.531 --rc genhtml_function_coverage=1 00:07:02.531 --rc genhtml_legend=1 00:07:02.531 --rc geninfo_all_blocks=1 00:07:02.531 --rc geninfo_unexecuted_blocks=1 00:07:02.531 00:07:02.531 ' 00:07:02.531 22:51:41 version -- app/version.sh@17 -- # get_header_version major 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # cut -f2 00:07:02.531 22:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.531 22:51:41 version -- app/version.sh@17 -- # major=25 00:07:02.531 22:51:41 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.531 22:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # cut -f2 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.531 22:51:41 version -- app/version.sh@18 -- # minor=1 00:07:02.531 22:51:41 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.531 22:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # cut -f2 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.531 22:51:41 version -- app/version.sh@19 -- # patch=0 00:07:02.531 
22:51:41 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.531 22:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # cut -f2 00:07:02.531 22:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.531 22:51:41 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.531 22:51:41 version -- app/version.sh@22 -- # version=25.1 00:07:02.531 22:51:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.531 22:51:41 version -- app/version.sh@28 -- # version=25.1rc0 00:07:02.531 22:51:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:02.531 22:51:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.531 22:51:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:02.531 22:51:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:02.531 00:07:02.531 real 0m0.325s 00:07:02.531 user 0m0.201s 00:07:02.531 sys 0m0.180s 00:07:02.531 ************************************ 00:07:02.531 END TEST version 00:07:02.531 ************************************ 00:07:02.531 22:51:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.531 22:51:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 22:51:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:02.531 22:51:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:02.531 22:51:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:02.531 22:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.531 22:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.531 22:51:41 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.531 ************************************ 00:07:02.531 START TEST bdev_raid 00:07:02.531 ************************************ 00:07:02.531 22:51:41 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:02.791 * Looking for test storage... 00:07:02.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.791 22:51:41 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.791 --rc genhtml_branch_coverage=1 00:07:02.791 --rc genhtml_function_coverage=1 00:07:02.791 --rc genhtml_legend=1 00:07:02.791 --rc geninfo_all_blocks=1 00:07:02.791 --rc geninfo_unexecuted_blocks=1 00:07:02.791 00:07:02.791 ' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.791 --rc genhtml_branch_coverage=1 00:07:02.791 --rc genhtml_function_coverage=1 00:07:02.791 --rc genhtml_legend=1 00:07:02.791 --rc geninfo_all_blocks=1 00:07:02.791 --rc geninfo_unexecuted_blocks=1 00:07:02.791 00:07:02.791 ' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.791 --rc genhtml_branch_coverage=1 00:07:02.791 --rc genhtml_function_coverage=1 00:07:02.791 --rc genhtml_legend=1 00:07:02.791 --rc geninfo_all_blocks=1 00:07:02.791 --rc geninfo_unexecuted_blocks=1 00:07:02.791 00:07:02.791 ' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.791 --rc genhtml_branch_coverage=1 00:07:02.791 --rc genhtml_function_coverage=1 00:07:02.791 --rc genhtml_legend=1 00:07:02.791 --rc geninfo_all_blocks=1 00:07:02.791 --rc geninfo_unexecuted_blocks=1 00:07:02.791 00:07:02.791 ' 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.791 22:51:41 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:02.791 22:51:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.791 22:51:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.791 ************************************ 00:07:02.791 START TEST raid1_resize_data_offset_test 00:07:02.791 ************************************ 00:07:02.791 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:02.791 22:51:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=73191 00:07:02.791 22:51:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.791 22:51:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73191' 00:07:02.791 Process raid pid: 73191 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73191 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73191 ']' 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.792 22:51:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.069 [2024-11-26 22:51:41.931493] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:03.069 [2024-11-26 22:51:41.931663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.069 [2024-11-26 22:51:42.067498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:03.069 [2024-11-26 22:51:42.104376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.069 [2024-11-26 22:51:42.130673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.069 [2024-11-26 22:51:42.172693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.069 [2024-11-26 22:51:42.172812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.651 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.651 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:03.651 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:03.651 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.651 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.910 malloc0 00:07:03.910 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.910 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:03.910 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 malloc1 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:03.911 null0 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 [2024-11-26 22:51:42.833958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:03.911 [2024-11-26 22:51:42.835757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:03.911 [2024-11-26 22:51:42.835805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:03.911 [2024-11-26 22:51:42.835935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:03.911 [2024-11-26 22:51:42.835949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:03.911 [2024-11-26 22:51:42.836192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:03.911 [2024-11-26 22:51:42.836317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:03.911 [2024-11-26 22:51:42.836327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:03.911 [2024-11-26 22:51:42.836433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 [2024-11-26 22:51:42.893951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 malloc2 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 [2024-11-26 22:51:43.018187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc2 is claimed 00:07:03.911 [2024-11-26 22:51:43.023297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.911 [2024-11-26 22:51:43.025218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.911 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73191 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73191 ']' 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73191 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73191 00:07:04.171 killing process with pid 73191 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.171 22:51:43 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73191' 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73191 00:07:04.171 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73191 00:07:04.171 [2024-11-26 22:51:43.106210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.171 [2024-11-26 22:51:43.107193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:04.171 [2024-11-26 22:51:43.107283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.171 [2024-11-26 22:51:43.107303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:04.171 [2024-11-26 22:51:43.113255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.171 [2024-11-26 22:51:43.113542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.171 [2024-11-26 22:51:43.113567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:04.431 [2024-11-26 22:51:43.319741] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.431 ************************************ 00:07:04.431 END TEST raid1_resize_data_offset_test 00:07:04.431 ************************************ 00:07:04.431 22:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:04.431 00:07:04.431 real 0m1.691s 00:07:04.431 user 0m1.673s 00:07:04.431 sys 0m0.452s 00:07:04.431 22:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.431 22:51:43 bdev_raid.raid1_resize_data_offset_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.691 22:51:43 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:04.691 22:51:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.691 22:51:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.691 22:51:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.691 ************************************ 00:07:04.691 START TEST raid0_resize_superblock_test 00:07:04.691 ************************************ 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73247 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73247' 00:07:04.691 Process raid pid: 73247 00:07:04.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73247 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73247 ']' 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.691 22:51:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.691 [2024-11-26 22:51:43.690685] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:04.691 [2024-11-26 22:51:43.690852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.951 [2024-11-26 22:51:43.825529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.951 [2024-11-26 22:51:43.862807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.951 [2024-11-26 22:51:43.887685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.951 [2024-11-26 22:51:43.928886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.951 [2024-11-26 22:51:43.928996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.519 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.519 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.519 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:05.519 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.519 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 malloc0 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 [2024-11-26 22:51:44.668094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.778 [2024-11-26 22:51:44.668199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.778 [2024-11-26 22:51:44.668275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:05.778 [2024-11-26 22:51:44.668307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:05.778 [2024-11-26 22:51:44.670324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.778 [2024-11-26 22:51:44.670392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.778 pt0 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 dea93ee7-8a50-4d35-bbef-c110f4b6fd5f 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 271d61bf-11ff-45ae-9f97-f930bdcbc90c 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 f5381bab-3dd8-44b1-97a9-98d51e3434dd 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 
-- # case $raid_level in 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 [2024-11-26 22:51:44.801881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 271d61bf-11ff-45ae-9f97-f930bdcbc90c is claimed 00:07:05.778 [2024-11-26 22:51:44.801958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5381bab-3dd8-44b1-97a9-98d51e3434dd is claimed 00:07:05.778 [2024-11-26 22:51:44.802073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:05.778 [2024-11-26 22:51:44.802084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:05.778 [2024-11-26 22:51:44.802368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:05.778 [2024-11-26 22:51:44.802535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:05.778 [2024-11-26 22:51:44.802552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:05.778 [2024-11-26 22:51:44.802671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.778 22:51:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:05.778 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.779 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.779 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.779 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.779 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 [2024-11-26 22:51:44.914112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level 
in 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 [2024-11-26 22:51:44.958073] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.038 [2024-11-26 22:51:44.958104] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '271d61bf-11ff-45ae-9f97-f930bdcbc90c' was resized: old size 131072, new size 204800 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 [2024-11-26 22:51:44.970025] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.038 [2024-11-26 22:51:44.970052] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f5381bab-3dd8-44b1-97a9-98d51e3434dd' was resized: old size 131072, new size 204800 00:07:06.038 [2024-11-26 22:51:44.970078] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq 
'.[].num_blocks' 00:07:06.038 [2024-11-26 22:51:45.062185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 [2024-11-26 22:51:45.110011] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:06.038 [2024-11-26 22:51:45.110101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:06.038 [2024-11-26 22:51:45.110120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.038 [2024-11-26 22:51:45.110141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:06.038 [2024-11-26 22:51:45.110286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.038 [2024-11-26 22:51:45.110326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.038 [2024-11-26 22:51:45.110335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 [2024-11-26 22:51:45.121926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:06.038 [2024-11-26 22:51:45.121977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.038 [2024-11-26 22:51:45.121999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:06.038 [2024-11-26 22:51:45.122009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.038 [2024-11-26 22:51:45.124164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.038 [2024-11-26 22:51:45.124241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:06.038 [2024-11-26 22:51:45.125699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 271d61bf-11ff-45ae-9f97-f930bdcbc90c 00:07:06.038 [2024-11-26 22:51:45.125756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 271d61bf-11ff-45ae-9f97-f930bdcbc90c is claimed 00:07:06.038 [2024-11-26 22:51:45.125849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f5381bab-3dd8-44b1-97a9-98d51e3434dd 00:07:06.038 [2024-11-26 22:51:45.125866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5381bab-3dd8-44b1-97a9-98d51e3434dd is claimed 00:07:06.038 [2024-11-26 22:51:45.125949] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f5381bab-3dd8-44b1-97a9-98d51e3434dd (2) smaller than existing raid bdev Raid (3) 00:07:06.038 [2024-11-26 22:51:45.125965] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev 271d61bf-11ff-45ae-9f97-f930bdcbc90c: File exists 00:07:06.038 [2024-11-26 22:51:45.126006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:06.038 [2024-11-26 22:51:45.126013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:06.038 [2024-11-26 22:51:45.126271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:06.038 [2024-11-26 22:51:45.126389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.038 [2024-11-26 22:51:45.126406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:06.038 [2024-11-26 22:51:45.126545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.038 pt0 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.038 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:06.038 22:51:45 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:06.039 [2024-11-26 22:51:45.146286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.039 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73247 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73247 ']' 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73247 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73247 00:07:06.297 killing process with pid 73247 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73247' 00:07:06.297 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73247 00:07:06.297 [2024-11-26 22:51:45.239137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:06.297 [2024-11-26 22:51:45.239200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.298 [2024-11-26 22:51:45.239236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.298 [2024-11-26 22:51:45.239260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:06.298 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73247 00:07:06.298 [2024-11-26 22:51:45.395356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.557 22:51:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:06.557 00:07:06.557 real 0m2.008s 00:07:06.557 user 0m2.267s 00:07:06.557 sys 0m0.516s 00:07:06.557 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.557 22:51:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.557 ************************************ 00:07:06.557 END TEST raid0_resize_superblock_test 00:07:06.557 ************************************ 00:07:06.557 22:51:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:06.557 22:51:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.557 22:51:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.557 22:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.817 ************************************ 00:07:06.817 START TEST raid1_resize_superblock_test 00:07:06.817 ************************************ 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:06.817 22:51:45 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73318 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.817 Process raid pid: 73318 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73318' 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73318 00:07:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73318 ']' 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.817 22:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.817 [2024-11-26 22:51:45.782730] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:06.817 [2024-11-26 22:51:45.782951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.817 [2024-11-26 22:51:45.922843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:07.077 [2024-11-26 22:51:45.954810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.077 [2024-11-26 22:51:45.980716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.077 [2024-11-26 22:51:46.022108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.077 [2024-11-26 22:51:46.022241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.646 malloc0 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.646 [2024-11-26 22:51:46.708756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:07.646 [2024-11-26 22:51:46.708907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.646 [2024-11-26 22:51:46.708964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.646 [2024-11-26 22:51:46.708995] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:07.646 [2024-11-26 22:51:46.711053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.646 [2024-11-26 22:51:46.711124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:07.646 pt0 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.646 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 1d958247-36e9-4bb5-9f16-9858fc133faf 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 3df52d40-579b-43db-8586-89eeaff5dbb3 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 [2024-11-26 22:51:46.843435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb is claimed 00:07:07.905 [2024-11-26 22:51:46.843623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3df52d40-579b-43db-8586-89eeaff5dbb3 is claimed 00:07:07.905 [2024-11-26 22:51:46.843768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:07.905 [2024-11-26 22:51:46.843835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:07.905 [2024-11-26 22:51:46.844092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:07.905 [2024-11-26 22:51:46.844286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:07.905 [2024-11-26 22:51:46.844335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:07.905 [2024-11-26 22:51:46.844489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:07.905 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.906 22:51:46 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:07.906 [2024-11-26 22:51:46.959668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:07.906 22:51:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.906 [2024-11-26 22:51:47.011604] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.906 [2024-11-26 22:51:47.011683] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb' was resized: old size 131072, new size 204800 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.906 [2024-11-26 22:51:47.023551] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.906 [2024-11-26 22:51:47.023617] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3df52d40-579b-43db-8586-89eeaff5dbb3' was resized: old size 131072, new size 204800 00:07:07.906 [2024-11-26 22:51:47.023643] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:07.906 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq 
'.[].num_blocks' 00:07:08.166 [2024-11-26 22:51:47.131694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 [2024-11-26 22:51:47.179550] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:08.166 [2024-11-26 22:51:47.179670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:08.166 [2024-11-26 22:51:47.179724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:08.166 [2024-11-26 22:51:47.179898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.166 [2024-11-26 22:51:47.180083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.166 [2024-11-26 22:51:47.180174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.166 [2024-11-26 22:51:47.180221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 [2024-11-26 22:51:47.191494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:08.166 [2024-11-26 22:51:47.191593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.166 [2024-11-26 22:51:47.191630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:08.166 [2024-11-26 22:51:47.191658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.166 [2024-11-26 22:51:47.193656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.166 [2024-11-26 22:51:47.193725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:08.166 [2024-11-26 22:51:47.195077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb 00:07:08.166 [2024-11-26 22:51:47.195172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb is claimed 00:07:08.166 [2024-11-26 22:51:47.195311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3df52d40-579b-43db-8586-89eeaff5dbb3 00:07:08.166 [2024-11-26 22:51:47.195373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3df52d40-579b-43db-8586-89eeaff5dbb3 is claimed 00:07:08.166 [2024-11-26 22:51:47.195508] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3df52d40-579b-43db-8586-89eeaff5dbb3 (2) smaller than existing raid bdev Raid (3) 00:07:08.166 [2024-11-26 22:51:47.195574] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev fe804a99-d189-4a5e-abc2-d3f1f6b6e9eb: File exists 00:07:08.166 [2024-11-26 22:51:47.195649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.166 [2024-11-26 22:51:47.195678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:08.166 [2024-11-26 22:51:47.195910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:08.166 [2024-11-26 22:51:47.196057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.166 [2024-11-26 22:51:47.196096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:08.166 [2024-11-26 22:51:47.196279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.166 pt0 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.167 22:51:47 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:08.167 [2024-11-26 22:51:47.219801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73318 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73318 ']' 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73318 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.167 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73318 00:07:08.426 killing process with pid 73318 00:07:08.426 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.426 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.426 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73318' 00:07:08.426 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73318 00:07:08.426 [2024-11-26 22:51:47.304194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:08.426 [2024-11-26 22:51:47.304271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.426 [2024-11-26 22:51:47.304322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.426 [2024-11-26 22:51:47.304336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:08.426 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73318 00:07:08.426 [2024-11-26 22:51:47.461343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.685 22:51:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:08.685 00:07:08.685 real 0m1.993s 00:07:08.685 user 0m2.241s 00:07:08.685 sys 0m0.529s 00:07:08.685 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.685 22:51:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.685 ************************************ 00:07:08.685 END TEST raid1_resize_superblock_test 00:07:08.685 ************************************ 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:08.685 22:51:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:08.685 22:51:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.685 22:51:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.685 22:51:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.685 ************************************ 
00:07:08.685 START TEST raid_function_test_raid0 00:07:08.685 ************************************ 00:07:08.685 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:08.685 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:08.685 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:08.685 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:08.686 Process raid pid: 73396 00:07:08.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73396 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73396' 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73396 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73396 ']' 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.686 22:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:08.945 [2024-11-26 22:51:47.879072] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:08.945 [2024-11-26 22:51:47.879335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.945 [2024-11-26 22:51:48.019563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.945 [2024-11-26 22:51:48.048141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.203 [2024-11-26 22:51:48.074066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.203 [2024-11-26 22:51:48.116475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.203 [2024-11-26 22:51:48.116590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.770 Base_1 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.770 Base_2 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.770 [2024-11-26 22:51:48.725916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.770 [2024-11-26 22:51:48.727734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.770 [2024-11-26 22:51:48.727833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:09.770 [2024-11-26 22:51:48.727869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.770 [2024-11-26 22:51:48.728131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:09.770 [2024-11-26 22:51:48.728298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:09.770 [2024-11-26 22:51:48.728342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:09.770 [2024-11-26 22:51:48.728491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.770 22:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:10.029 [2024-11-26 22:51:48.970009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:10.029 /dev/nbd0 00:07:10.029 22:51:48 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.029 1+0 records in 00:07:10.029 1+0 records out 00:07:10.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413284 s, 9.9 MB/s 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 
-- # return 0 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.029 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.289 { 00:07:10.289 "nbd_device": "/dev/nbd0", 00:07:10.289 "bdev_name": "raid" 00:07:10.289 } 00:07:10.289 ]' 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.289 { 00:07:10.289 "nbd_device": "/dev/nbd0", 00:07:10.289 "bdev_name": "raid" 00:07:10.289 } 00:07:10.289 ]' 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
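The `raid_unmap_data_verify` pass that follows logs three `unmap_off`/`unmap_len` byte values (65536, 1041920, 233472 and their offsets). These are simply the `unmap_blk_offs` and `unmap_blk_nums` arrays from `bdev_raid.sh` scaled by the 512-byte logical sector size reported by `lsblk -o LOG-SEC /dev/nbd0`. A small sketch re-deriving the values printed in this log:

```python
# Re-derive the unmap_off / unmap_len byte values that the
# raid_unmap_data_verify loop in bdev_raid.sh prints for each
# (block offset, block count) pair seen in the xtrace above.
BLKSIZE = 512  # logical sector size from `lsblk -o LOG-SEC /dev/nbd0`

unmap_blk_offs = [0, 1028, 321]    # unmap_blk_offs array in bdev_raid.sh
unmap_blk_nums = [128, 2035, 456]  # unmap_blk_nums array in bdev_raid.sh

for blk_off, blk_num in zip(unmap_blk_offs, unmap_blk_nums):
    unmap_off = blk_off * BLKSIZE
    unmap_len = blk_num * BLKSIZE
    print(unmap_off, unmap_len)
# prints: 0 65536 / 526336 1041920 / 164352 233472, matching the log
```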
00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:10.289 4096+0 records in 00:07:10.289 4096+0 records out 00:07:10.289 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0336753 s, 62.3 MB/s 00:07:10.289 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:10.549 4096+0 records in 00:07:10.549 4096+0 records out 00:07:10.549 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.205696 s, 10.2 MB/s 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:10.549 128+0 records in 00:07:10.549 128+0 records out 00:07:10.549 65536 bytes (66 kB, 64 KiB) copied, 0.00124984 s, 52.4 MB/s 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:10.549 2035+0 records in 00:07:10.549 2035+0 records out 00:07:10.549 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0127523 s, 81.7 MB/s 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.549 456+0 records in 00:07:10.549 456+0 records out 00:07:10.549 233472 bytes (233 kB, 228 KiB) copied, 0.00410647 s, 56.9 MB/s 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.549 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.809 [2024-11-26 22:51:49.897003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.809 22:51:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73396 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 73396 ']' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73396 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.069 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73396 
00:07:11.329 killing process with pid 73396 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73396' 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73396 00:07:11.329 [2024-11-26 22:51:50.202712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.329 [2024-11-26 22:51:50.202817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73396 00:07:11.329 [2024-11-26 22:51:50.202870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.329 [2024-11-26 22:51:50.202886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:11.329 [2024-11-26 22:51:50.226007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.329 ************************************ 00:07:11.329 END TEST raid_function_test_raid0 00:07:11.329 ************************************ 00:07:11.329 22:51:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:11.329 00:07:11.329 real 0m2.661s 00:07:11.329 user 0m3.253s 00:07:11.329 sys 0m0.937s 00:07:11.330 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.330 22:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 22:51:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:11.590 22:51:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:07:11.590 22:51:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.590 22:51:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 ************************************ 00:07:11.590 START TEST raid_function_test_concat 00:07:11.590 ************************************ 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73509 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73509' 00:07:11.590 Process raid pid: 73509 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73509 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73509 ']' 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.590 22:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 [2024-11-26 22:51:50.600048] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:11.590 [2024-11-26 22:51:50.600261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.849 [2024-11-26 22:51:50.736585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.849 [2024-11-26 22:51:50.776433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.849 [2024-11-26 22:51:50.804999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.849 [2024-11-26 22:51:50.847957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.849 [2024-11-26 22:51:50.848100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.418 Base_1 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.418 22:51:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.418 Base_2 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.418 [2024-11-26 22:51:51.462735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.418 [2024-11-26 22:51:51.464687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.418 [2024-11-26 22:51:51.464797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:12.418 [2024-11-26 22:51:51.464837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.418 [2024-11-26 22:51:51.465125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:12.418 [2024-11-26 22:51:51.465311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:12.418 [2024-11-26 22:51:51.465359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:12.418 [2024-11-26 22:51:51.465528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.418 22:51:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:12.418 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:12.677 [2024-11-26 22:51:51.698847] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:12.677 /dev/nbd0 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.677 1+0 records in 00:07:12.677 1+0 records out 00:07:12.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240396 s, 17.0 MB/s 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.677 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.936 { 00:07:12.936 "nbd_device": "/dev/nbd0", 00:07:12.936 "bdev_name": "raid" 00:07:12.936 } 00:07:12.936 ]' 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.936 { 00:07:12.936 "nbd_device": "/dev/nbd0", 00:07:12.936 "bdev_name": "raid" 00:07:12.936 } 00:07:12.936 ]' 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:12.936 22:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
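The concat run below repeats the same discard-and-verify pattern as the raid0 run: seed `/raidtest/raidrandtest` with random data, copy it onto the raid bdev via `/dev/nbd0`, then for each window zero the range in the reference file with `dd conv=notrunc` while discarding it on the device with `blkdiscard`, and `cmp` the two. A self-contained sketch of that logic using two plain in-memory buffers as stand-ins for the reference file and the nbd device (hypothetical stand-ins; no SPDK target or nbd device is needed, and the sketch assumes, as the test does, that a discarded range reads back as zeroes):

```python
# Simulate raid_unmap_data_verify: zero a window in the reference copy
# (dd if=/dev/zero ... conv=notrunc) and "discard" the same window on the
# device (blkdiscard -o off -l len, modeled as zeroing), then compare
# (cmp -b -n 2097152). Buffers stand in for the file and /dev/nbd0.
import os

BLKSIZE, NBLKS = 512, 4096                 # 4096 blocks of 512 B = 2097152 B
ref = bytearray(os.urandom(BLKSIZE * NBLKS))  # /raidtest/raidrandtest contents
dev = bytearray(ref)                          # device holds the same data

for blk_off, blk_num in [(0, 128), (1028, 2035), (321, 456)]:
    off, length = blk_off * BLKSIZE, blk_num * BLKSIZE
    ref[off:off + length] = bytes(length)  # dd if=/dev/zero seek=... conv=notrunc
    dev[off:off + length] = bytes(length)  # blkdiscard: range reads back as zeroes
    assert ref == dev                      # cmp passes after every window
```

Note the windows overlap (blocks 321..776 fall inside 1028-block territory only after the second pass), which is why the test re-runs `cmp` over the full 2097152 bytes after each discard rather than checking only the window.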
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:12.936 4096+0 records in
00:07:12.936 4096+0 records out
00:07:12.936 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326145 s, 64.3 MB/s
00:07:12.936 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:13.193 4096+0 records in
00:07:13.193 4096+0 records out
00:07:13.193 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.182242 s, 11.5 MB/s
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:13.193 128+0 records in
00:07:13.193 128+0 records out
00:07:13.193 65536 bytes (66 kB, 64 KiB) copied, 0.001333 s, 49.2 MB/s
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:13.193 2035+0 records in
00:07:13.193 2035+0 records out
00:07:13.193 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0154107 s, 67.6 MB/s
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:13.193 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:13.452 456+0 records in
00:07:13.452 456+0 records out
00:07:13.452 233472 bytes (233 kB, 228 KiB) copied, 0.0039333 s, 59.4 MB/s
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
[2024-11-26 22:51:52.572814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:13.452 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:13.711 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73509
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73509 ']'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73509
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73509
killing process with pid 73509
22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73509'
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73509
[2024-11-26 22:51:52.898297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:13.969 22:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73509
00:07:13.969 [2024-11-26 22:51:52.898459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:13.969 [2024-11-26 22:51:52.898530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:13.969 [2024-11-26 22:51:52.898541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline
00:07:13.969 [2024-11-26 22:51:52.942145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:14.229 22:51:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:14.229
00:07:14.229 real 0m2.762s
00:07:14.229 user 0m3.374s
00:07:14.229 sys 0m0.913s
00:07:14.229 22:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.229 ************************************
00:07:14.229 END TEST raid_function_test_concat
00:07:14.229 ************************************
00:07:14.229 22:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:14.229 22:51:53 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:14.229 22:51:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:14.229 22:51:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.229 22:51:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:14.229 ************************************
00:07:14.229 START TEST raid0_resize_test
00:07:14.229 ************************************
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:14.229 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73625
00:07:14.230 Process raid pid: 73625
22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73625'
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73625
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73625 ']'
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:14.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:14.230 22:51:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.489 [2024-11-26 22:51:53.433412] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:07:14.489 [2024-11-26 22:51:53.433531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:14.489 [2024-11-26 22:51:53.570696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:14.489 [2024-11-26 22:51:53.610289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.748 [2024-11-26 22:51:53.652793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.748 [2024-11-26 22:51:53.730124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:14.748 [2024-11-26 22:51:53.730309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.355 Base_1
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.355 Base_2
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.355 [2024-11-26 22:51:54.305988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:15.355 [2024-11-26 22:51:54.308166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:15.355 [2024-11-26 22:51:54.308285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:07:15.355 [2024-11-26 22:51:54.308329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:15.355 [2024-11-26 22:51:54.308612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:07:15.355 [2024-11-26 22:51:54.308744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:07:15.355 [2024-11-26 22:51:54.308786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400
00:07:15.355 [2024-11-26 22:51:54.308952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.355 [2024-11-26 22:51:54.317924] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:15.355 [2024-11-26 22:51:54.317984] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:15.355 true
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.355 [2024-11-26 22:51:54.334183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.355 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.356 [2024-11-26 22:51:54.373944] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:15.356 [2024-11-26 22:51:54.374009] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:15.356 [2024-11-26 22:51:54.374076] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:07:15.356 true
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.356 [2024-11-26 22:51:54.390116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73625
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73625 ']'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 73625
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73625
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73625'
killing process with pid 73625
22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 73625
00:07:15.356 [2024-11-26 22:51:54.459586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:15.356 [2024-11-26 22:51:54.459702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:15.356 [2024-11-26 22:51:54.459766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:15.356 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 73625
00:07:15.356 [2024-11-26 22:51:54.459826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline
00:07:15.356 [2024-11-26 22:51:54.461963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:15.927 22:51:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:15.927
00:07:15.927 real 0m1.448s
00:07:15.927 user 0m1.556s
00:07:15.927 sys 0m0.361s
00:07:15.927 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.927 22:51:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.927 ************************************
00:07:15.927 END TEST raid0_resize_test
00:07:15.927 ************************************
00:07:15.927 22:51:56 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:07:15.927 22:51:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:15.927 22:51:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:15.927 22:51:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:15.927 ************************************
00:07:15.927 START TEST raid1_resize_test
00:07:15.927 ************************************
00:07:15.927 22:51:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:07:15.927 22:51:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73671
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73671'
Process raid pid: 73671
22:51:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73671
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73671 ']'
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:15.927 22:51:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.927 [2024-11-26 22:51:54.961140] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:07:15.927 [2024-11-26 22:51:54.961287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:16.188 [2024-11-26 22:51:55.097909] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:16.188 [2024-11-26 22:51:55.137489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.188 [2024-11-26 22:51:55.177465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.188 [2024-11-26 22:51:55.254479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:16.188 [2024-11-26 22:51:55.254530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 Base_1
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 Base_2
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 [2024-11-26 22:51:55.797442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:16.759 [2024-11-26 22:51:55.799620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:16.759 [2024-11-26 22:51:55.799718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:07:16.759 [2024-11-26 22:51:55.799751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:16.759 [2024-11-26 22:51:55.800043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:07:16.759 [2024-11-26 22:51:55.800189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:07:16.759 [2024-11-26 22:51:55.800230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400
00:07:16.759 [2024-11-26 22:51:55.800411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 [2024-11-26 22:51:55.809401] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:16.759 [2024-11-26 22:51:55.809460] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:16.759 true
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 [2024-11-26 22:51:55.825611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.759 [2024-11-26 22:51:55.873416] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:16.759 [2024-11-26 22:51:55.873478] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:16.759 [2024-11-26 22:51:55.873546] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:07:16.759 true
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.759 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.020 [2024-11-26 22:51:55.889586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73671
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73671 ']'
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 73671
00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73671 00:07:17.020 killing process with pid 73671 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73671' 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 73671 00:07:17.020 [2024-11-26 22:51:55.976689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.020 [2024-11-26 22:51:55.976762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.020 22:51:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 73671 00:07:17.020 [2024-11-26 22:51:55.977227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.020 [2024-11-26 22:51:55.977247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:17.020 [2024-11-26 22:51:55.979008] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.281 22:51:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:17.281 00:07:17.281 real 0m1.445s 00:07:17.281 user 0m1.530s 00:07:17.281 sys 0m0.387s 00:07:17.281 22:51:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.281 22:51:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.281 ************************************ 00:07:17.281 END TEST 
raid1_resize_test 00:07:17.281 ************************************ 00:07:17.281 22:51:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:17.281 22:51:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:17.281 22:51:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:17.281 22:51:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.281 22:51:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.281 22:51:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.281 ************************************ 00:07:17.281 START TEST raid_state_function_test 00:07:17.281 ************************************ 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.281 22:51:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:17.281 Process raid pid: 73722 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73722 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73722' 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73722 00:07:17.281 22:51:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73722 ']' 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.281 22:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.542 [2024-11-26 22:51:56.475129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:17.542 [2024-11-26 22:51:56.475250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.542 [2024-11-26 22:51:56.609057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.542 [2024-11-26 22:51:56.629065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.802 [2024-11-26 22:51:56.668590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.803 [2024-11-26 22:51:56.745740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.803 [2024-11-26 22:51:56.745780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.373 [2024-11-26 22:51:57.302610] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.373 [2024-11-26 22:51:57.302769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.373 [2024-11-26 22:51:57.302819] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.373 [2024-11-26 22:51:57.302842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.373 "name": "Existed_Raid", 00:07:18.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.373 "strip_size_kb": 64, 00:07:18.373 "state": "configuring", 00:07:18.373 "raid_level": "raid0", 00:07:18.373 "superblock": false, 00:07:18.373 "num_base_bdevs": 2, 00:07:18.373 "num_base_bdevs_discovered": 0, 00:07:18.373 "num_base_bdevs_operational": 2, 00:07:18.373 "base_bdevs_list": [ 00:07:18.373 { 00:07:18.373 "name": "BaseBdev1", 00:07:18.373 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:18.373 "is_configured": false, 00:07:18.373 "data_offset": 0, 00:07:18.373 "data_size": 0 00:07:18.373 }, 00:07:18.373 { 00:07:18.373 "name": "BaseBdev2", 00:07:18.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.373 "is_configured": false, 00:07:18.373 "data_offset": 0, 00:07:18.373 "data_size": 0 00:07:18.373 } 00:07:18.373 ] 00:07:18.373 }' 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.373 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.634 [2024-11-26 22:51:57.726611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.634 [2024-11-26 22:51:57.726710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.634 [2024-11-26 22:51:57.738595] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.634 [2024-11-26 22:51:57.738690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.634 [2024-11-26 
22:51:57.738720] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.634 [2024-11-26 22:51:57.738743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.634 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.894 [2024-11-26 22:51:57.765549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.894 BaseBdev1 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.894 22:51:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.894 [ 00:07:18.894 { 00:07:18.894 "name": "BaseBdev1", 00:07:18.894 "aliases": [ 00:07:18.894 "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03" 00:07:18.894 ], 00:07:18.894 "product_name": "Malloc disk", 00:07:18.894 "block_size": 512, 00:07:18.894 "num_blocks": 65536, 00:07:18.894 "uuid": "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03", 00:07:18.894 "assigned_rate_limits": { 00:07:18.894 "rw_ios_per_sec": 0, 00:07:18.894 "rw_mbytes_per_sec": 0, 00:07:18.894 "r_mbytes_per_sec": 0, 00:07:18.894 "w_mbytes_per_sec": 0 00:07:18.894 }, 00:07:18.894 "claimed": true, 00:07:18.894 "claim_type": "exclusive_write", 00:07:18.894 "zoned": false, 00:07:18.894 "supported_io_types": { 00:07:18.894 "read": true, 00:07:18.894 "write": true, 00:07:18.894 "unmap": true, 00:07:18.894 "flush": true, 00:07:18.894 "reset": true, 00:07:18.894 "nvme_admin": false, 00:07:18.894 "nvme_io": false, 00:07:18.894 "nvme_io_md": false, 00:07:18.894 "write_zeroes": true, 00:07:18.894 "zcopy": true, 00:07:18.894 "get_zone_info": false, 00:07:18.894 "zone_management": false, 00:07:18.894 "zone_append": false, 00:07:18.894 "compare": false, 00:07:18.894 "compare_and_write": false, 00:07:18.894 "abort": true, 00:07:18.894 "seek_hole": false, 00:07:18.894 "seek_data": false, 00:07:18.894 "copy": true, 00:07:18.894 "nvme_iov_md": false 00:07:18.894 }, 00:07:18.894 "memory_domains": [ 00:07:18.894 { 00:07:18.894 "dma_device_id": "system", 00:07:18.894 "dma_device_type": 1 00:07:18.894 }, 00:07:18.894 { 00:07:18.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.894 "dma_device_type": 
2 00:07:18.894 } 00:07:18.894 ], 00:07:18.894 "driver_specific": {} 00:07:18.894 } 00:07:18.894 ] 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.894 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.895 "name": "Existed_Raid", 00:07:18.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.895 "strip_size_kb": 64, 00:07:18.895 "state": "configuring", 00:07:18.895 "raid_level": "raid0", 00:07:18.895 "superblock": false, 00:07:18.895 "num_base_bdevs": 2, 00:07:18.895 "num_base_bdevs_discovered": 1, 00:07:18.895 "num_base_bdevs_operational": 2, 00:07:18.895 "base_bdevs_list": [ 00:07:18.895 { 00:07:18.895 "name": "BaseBdev1", 00:07:18.895 "uuid": "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03", 00:07:18.895 "is_configured": true, 00:07:18.895 "data_offset": 0, 00:07:18.895 "data_size": 65536 00:07:18.895 }, 00:07:18.895 { 00:07:18.895 "name": "BaseBdev2", 00:07:18.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.895 "is_configured": false, 00:07:18.895 "data_offset": 0, 00:07:18.895 "data_size": 0 00:07:18.895 } 00:07:18.895 ] 00:07:18.895 }' 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.895 22:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.158 [2024-11-26 22:51:58.233745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.158 [2024-11-26 22:51:58.233894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.158 22:51:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.158 [2024-11-26 22:51:58.245791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.158 [2024-11-26 22:51:58.248054] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.158 [2024-11-26 22:51:58.248152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.158 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.418 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.418 "name": "Existed_Raid", 00:07:19.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.418 "strip_size_kb": 64, 00:07:19.418 "state": "configuring", 00:07:19.418 "raid_level": "raid0", 00:07:19.418 "superblock": false, 00:07:19.418 "num_base_bdevs": 2, 00:07:19.418 "num_base_bdevs_discovered": 1, 00:07:19.418 "num_base_bdevs_operational": 2, 00:07:19.418 "base_bdevs_list": [ 00:07:19.418 { 00:07:19.418 "name": "BaseBdev1", 00:07:19.418 "uuid": "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03", 00:07:19.418 "is_configured": true, 00:07:19.418 "data_offset": 0, 00:07:19.418 "data_size": 65536 00:07:19.418 }, 00:07:19.418 { 00:07:19.418 "name": "BaseBdev2", 00:07:19.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.418 "is_configured": false, 00:07:19.418 "data_offset": 0, 00:07:19.418 "data_size": 0 00:07:19.418 } 00:07:19.418 ] 00:07:19.418 }' 00:07:19.418 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.418 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.678 
22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.678 [2024-11-26 22:51:58.646835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.678 [2024-11-26 22:51:58.646960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:19.678 [2024-11-26 22:51:58.646991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.678 BaseBdev2 00:07:19.678 [2024-11-26 22:51:58.647356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:19.678 [2024-11-26 22:51:58.647550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:19.678 [2024-11-26 22:51:58.647561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:19.678 [2024-11-26 22:51:58.647794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.678 [ 00:07:19.678 { 00:07:19.678 "name": "BaseBdev2", 00:07:19.678 "aliases": [ 00:07:19.678 "c34fbfdd-8d90-447d-a5c2-11dd0e58e149" 00:07:19.678 ], 00:07:19.678 "product_name": "Malloc disk", 00:07:19.678 "block_size": 512, 00:07:19.678 "num_blocks": 65536, 00:07:19.678 "uuid": "c34fbfdd-8d90-447d-a5c2-11dd0e58e149", 00:07:19.678 "assigned_rate_limits": { 00:07:19.678 "rw_ios_per_sec": 0, 00:07:19.678 "rw_mbytes_per_sec": 0, 00:07:19.678 "r_mbytes_per_sec": 0, 00:07:19.678 "w_mbytes_per_sec": 0 00:07:19.678 }, 00:07:19.678 "claimed": true, 00:07:19.678 "claim_type": "exclusive_write", 00:07:19.678 "zoned": false, 00:07:19.678 "supported_io_types": { 00:07:19.678 "read": true, 00:07:19.678 "write": true, 00:07:19.678 "unmap": true, 00:07:19.678 "flush": true, 00:07:19.678 "reset": true, 00:07:19.678 "nvme_admin": false, 00:07:19.678 "nvme_io": false, 00:07:19.678 "nvme_io_md": false, 00:07:19.678 "write_zeroes": true, 00:07:19.678 "zcopy": true, 00:07:19.678 "get_zone_info": false, 00:07:19.678 "zone_management": false, 00:07:19.678 "zone_append": false, 00:07:19.678 "compare": false, 00:07:19.678 "compare_and_write": false, 
00:07:19.678 "abort": true, 00:07:19.678 "seek_hole": false, 00:07:19.678 "seek_data": false, 00:07:19.678 "copy": true, 00:07:19.678 "nvme_iov_md": false 00:07:19.678 }, 00:07:19.678 "memory_domains": [ 00:07:19.678 { 00:07:19.678 "dma_device_id": "system", 00:07:19.678 "dma_device_type": 1 00:07:19.678 }, 00:07:19.678 { 00:07:19.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.678 "dma_device_type": 2 00:07:19.678 } 00:07:19.678 ], 00:07:19.678 "driver_specific": {} 00:07:19.678 } 00:07:19.678 ] 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.678 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.679 
22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.679 "name": "Existed_Raid", 00:07:19.679 "uuid": "d1fa9401-30b5-4652-8a6d-e728f83e0882", 00:07:19.679 "strip_size_kb": 64, 00:07:19.679 "state": "online", 00:07:19.679 "raid_level": "raid0", 00:07:19.679 "superblock": false, 00:07:19.679 "num_base_bdevs": 2, 00:07:19.679 "num_base_bdevs_discovered": 2, 00:07:19.679 "num_base_bdevs_operational": 2, 00:07:19.679 "base_bdevs_list": [ 00:07:19.679 { 00:07:19.679 "name": "BaseBdev1", 00:07:19.679 "uuid": "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03", 00:07:19.679 "is_configured": true, 00:07:19.679 "data_offset": 0, 00:07:19.679 "data_size": 65536 00:07:19.679 }, 00:07:19.679 { 00:07:19.679 "name": "BaseBdev2", 00:07:19.679 "uuid": "c34fbfdd-8d90-447d-a5c2-11dd0e58e149", 00:07:19.679 "is_configured": true, 00:07:19.679 "data_offset": 0, 00:07:19.679 "data_size": 65536 00:07:19.679 } 00:07:19.679 ] 00:07:19.679 }' 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.679 22:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.260 22:51:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:20.260 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.261 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.261 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 [2024-11-26 22:51:59.123372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.261 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.261 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.261 "name": "Existed_Raid", 00:07:20.261 "aliases": [ 00:07:20.261 "d1fa9401-30b5-4652-8a6d-e728f83e0882" 00:07:20.261 ], 00:07:20.261 "product_name": "Raid Volume", 00:07:20.261 "block_size": 512, 00:07:20.261 "num_blocks": 131072, 00:07:20.261 "uuid": "d1fa9401-30b5-4652-8a6d-e728f83e0882", 00:07:20.261 "assigned_rate_limits": { 00:07:20.261 "rw_ios_per_sec": 0, 00:07:20.261 "rw_mbytes_per_sec": 0, 00:07:20.261 "r_mbytes_per_sec": 0, 00:07:20.261 "w_mbytes_per_sec": 0 00:07:20.261 }, 00:07:20.261 "claimed": false, 00:07:20.261 "zoned": false, 00:07:20.261 "supported_io_types": { 00:07:20.261 "read": true, 00:07:20.261 "write": true, 00:07:20.261 "unmap": true, 00:07:20.261 
"flush": true, 00:07:20.261 "reset": true, 00:07:20.261 "nvme_admin": false, 00:07:20.261 "nvme_io": false, 00:07:20.261 "nvme_io_md": false, 00:07:20.261 "write_zeroes": true, 00:07:20.261 "zcopy": false, 00:07:20.261 "get_zone_info": false, 00:07:20.261 "zone_management": false, 00:07:20.261 "zone_append": false, 00:07:20.261 "compare": false, 00:07:20.261 "compare_and_write": false, 00:07:20.261 "abort": false, 00:07:20.262 "seek_hole": false, 00:07:20.262 "seek_data": false, 00:07:20.262 "copy": false, 00:07:20.262 "nvme_iov_md": false 00:07:20.262 }, 00:07:20.262 "memory_domains": [ 00:07:20.262 { 00:07:20.262 "dma_device_id": "system", 00:07:20.262 "dma_device_type": 1 00:07:20.262 }, 00:07:20.262 { 00:07:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.262 "dma_device_type": 2 00:07:20.262 }, 00:07:20.262 { 00:07:20.262 "dma_device_id": "system", 00:07:20.262 "dma_device_type": 1 00:07:20.262 }, 00:07:20.262 { 00:07:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.262 "dma_device_type": 2 00:07:20.262 } 00:07:20.262 ], 00:07:20.262 "driver_specific": { 00:07:20.262 "raid": { 00:07:20.262 "uuid": "d1fa9401-30b5-4652-8a6d-e728f83e0882", 00:07:20.262 "strip_size_kb": 64, 00:07:20.262 "state": "online", 00:07:20.262 "raid_level": "raid0", 00:07:20.262 "superblock": false, 00:07:20.262 "num_base_bdevs": 2, 00:07:20.262 "num_base_bdevs_discovered": 2, 00:07:20.262 "num_base_bdevs_operational": 2, 00:07:20.262 "base_bdevs_list": [ 00:07:20.262 { 00:07:20.262 "name": "BaseBdev1", 00:07:20.262 "uuid": "40c71ba5-1ebd-4ea7-a752-a2ba5cfe7e03", 00:07:20.262 "is_configured": true, 00:07:20.262 "data_offset": 0, 00:07:20.262 "data_size": 65536 00:07:20.262 }, 00:07:20.262 { 00:07:20.263 "name": "BaseBdev2", 00:07:20.263 "uuid": "c34fbfdd-8d90-447d-a5c2-11dd0e58e149", 00:07:20.263 "is_configured": true, 00:07:20.263 "data_offset": 0, 00:07:20.263 "data_size": 65536 00:07:20.263 } 00:07:20.263 ] 00:07:20.263 } 00:07:20.263 } 00:07:20.263 }' 00:07:20.263 
22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:20.263 BaseBdev2' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.263 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.263 [2024-11-26 22:51:59.367109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:20.263 [2024-11-26 22:51:59.367140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.263 [2024-11-26 22:51:59.367199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.529 "name": "Existed_Raid", 00:07:20.529 "uuid": "d1fa9401-30b5-4652-8a6d-e728f83e0882", 00:07:20.529 "strip_size_kb": 64, 00:07:20.529 "state": "offline", 00:07:20.529 "raid_level": "raid0", 00:07:20.529 "superblock": false, 00:07:20.529 "num_base_bdevs": 2, 00:07:20.529 "num_base_bdevs_discovered": 1, 00:07:20.529 "num_base_bdevs_operational": 1, 00:07:20.529 "base_bdevs_list": [ 
00:07:20.529 { 00:07:20.529 "name": null, 00:07:20.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.529 "is_configured": false, 00:07:20.529 "data_offset": 0, 00:07:20.529 "data_size": 65536 00:07:20.529 }, 00:07:20.529 { 00:07:20.529 "name": "BaseBdev2", 00:07:20.529 "uuid": "c34fbfdd-8d90-447d-a5c2-11dd0e58e149", 00:07:20.529 "is_configured": true, 00:07:20.529 "data_offset": 0, 00:07:20.529 "data_size": 65536 00:07:20.529 } 00:07:20.529 ] 00:07:20.529 }' 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.529 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:20.789 [2024-11-26 22:51:59.872062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:20.789 [2024-11-26 22:51:59.872190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.049 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73722 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73722 ']' 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73722 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73722 00:07:21.050 killing process with pid 73722 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73722' 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73722 00:07:21.050 [2024-11-26 22:51:59.979000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.050 22:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73722 00:07:21.050 [2024-11-26 22:51:59.980575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.311 00:07:21.311 real 0m3.938s 00:07:21.311 user 0m6.063s 00:07:21.311 sys 0m0.823s 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.311 ************************************ 00:07:21.311 END TEST raid_state_function_test 00:07:21.311 ************************************ 00:07:21.311 22:52:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:21.311 22:52:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.311 22:52:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.311 22:52:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.311 ************************************ 00:07:21.311 START TEST raid_state_function_test_sb 
00:07:21.311 ************************************ 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.311 22:52:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73964 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.311 Process raid pid: 73964 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73964' 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73964 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73964 ']' 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.311 22:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.572 [2024-11-26 22:52:00.497201] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:21.572 [2024-11-26 22:52:00.497338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.572 [2024-11-26 22:52:00.635283] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.572 [2024-11-26 22:52:00.672678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.831 [2024-11-26 22:52:00.712169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.831 [2024-11-26 22:52:00.788811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.831 [2024-11-26 22:52:00.788854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.400 [2024-11-26 22:52:01.316470] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:07:22.400 [2024-11-26 22:52:01.316525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.400 [2024-11-26 22:52:01.316546] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.400 [2024-11-26 22:52:01.316555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.400 22:52:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.400 "name": "Existed_Raid", 00:07:22.400 "uuid": "4bd174e5-e142-41d9-9b60-83e9662abb47", 00:07:22.400 "strip_size_kb": 64, 00:07:22.400 "state": "configuring", 00:07:22.400 "raid_level": "raid0", 00:07:22.400 "superblock": true, 00:07:22.400 "num_base_bdevs": 2, 00:07:22.400 "num_base_bdevs_discovered": 0, 00:07:22.400 "num_base_bdevs_operational": 2, 00:07:22.400 "base_bdevs_list": [ 00:07:22.400 { 00:07:22.400 "name": "BaseBdev1", 00:07:22.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.400 "is_configured": false, 00:07:22.400 "data_offset": 0, 00:07:22.400 "data_size": 0 00:07:22.400 }, 00:07:22.400 { 00:07:22.400 "name": "BaseBdev2", 00:07:22.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.400 "is_configured": false, 00:07:22.400 "data_offset": 0, 00:07:22.400 "data_size": 0 00:07:22.400 } 00:07:22.400 ] 00:07:22.400 }' 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.400 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.659 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.659 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.659 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.659 [2024-11-26 22:52:01.780483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.659 [2024-11-26 22:52:01.780533] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.919 [2024-11-26 22:52:01.792513] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.919 [2024-11-26 22:52:01.792557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.919 [2024-11-26 22:52:01.792568] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.919 [2024-11-26 22:52:01.792578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.919 [2024-11-26 22:52:01.819642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.919 BaseBdev1 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.919 
22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.919 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.919 [ 00:07:22.919 { 00:07:22.919 "name": "BaseBdev1", 00:07:22.919 "aliases": [ 00:07:22.919 "b7fa99bb-d01a-429f-8c02-fc3cb462a291" 00:07:22.919 ], 00:07:22.919 "product_name": "Malloc disk", 00:07:22.919 "block_size": 512, 00:07:22.919 "num_blocks": 65536, 00:07:22.919 "uuid": "b7fa99bb-d01a-429f-8c02-fc3cb462a291", 00:07:22.919 "assigned_rate_limits": { 00:07:22.919 "rw_ios_per_sec": 0, 00:07:22.919 "rw_mbytes_per_sec": 0, 00:07:22.920 "r_mbytes_per_sec": 0, 00:07:22.920 "w_mbytes_per_sec": 0 00:07:22.920 }, 00:07:22.920 "claimed": true, 00:07:22.920 "claim_type": "exclusive_write", 00:07:22.920 "zoned": 
false, 00:07:22.920 "supported_io_types": { 00:07:22.920 "read": true, 00:07:22.920 "write": true, 00:07:22.920 "unmap": true, 00:07:22.920 "flush": true, 00:07:22.920 "reset": true, 00:07:22.920 "nvme_admin": false, 00:07:22.920 "nvme_io": false, 00:07:22.920 "nvme_io_md": false, 00:07:22.920 "write_zeroes": true, 00:07:22.920 "zcopy": true, 00:07:22.920 "get_zone_info": false, 00:07:22.920 "zone_management": false, 00:07:22.920 "zone_append": false, 00:07:22.920 "compare": false, 00:07:22.920 "compare_and_write": false, 00:07:22.920 "abort": true, 00:07:22.920 "seek_hole": false, 00:07:22.920 "seek_data": false, 00:07:22.920 "copy": true, 00:07:22.920 "nvme_iov_md": false 00:07:22.920 }, 00:07:22.920 "memory_domains": [ 00:07:22.920 { 00:07:22.920 "dma_device_id": "system", 00:07:22.920 "dma_device_type": 1 00:07:22.920 }, 00:07:22.920 { 00:07:22.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.920 "dma_device_type": 2 00:07:22.920 } 00:07:22.920 ], 00:07:22.920 "driver_specific": {} 00:07:22.920 } 00:07:22.920 ] 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.920 "name": "Existed_Raid", 00:07:22.920 "uuid": "0471af97-9117-42e9-bb22-2e1023cdba65", 00:07:22.920 "strip_size_kb": 64, 00:07:22.920 "state": "configuring", 00:07:22.920 "raid_level": "raid0", 00:07:22.920 "superblock": true, 00:07:22.920 "num_base_bdevs": 2, 00:07:22.920 "num_base_bdevs_discovered": 1, 00:07:22.920 "num_base_bdevs_operational": 2, 00:07:22.920 "base_bdevs_list": [ 00:07:22.920 { 00:07:22.920 "name": "BaseBdev1", 00:07:22.920 "uuid": "b7fa99bb-d01a-429f-8c02-fc3cb462a291", 00:07:22.920 "is_configured": true, 00:07:22.920 "data_offset": 2048, 00:07:22.920 "data_size": 63488 00:07:22.920 }, 00:07:22.920 { 00:07:22.920 "name": "BaseBdev2", 00:07:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.920 "is_configured": false, 00:07:22.920 "data_offset": 0, 00:07:22.920 "data_size": 0 00:07:22.920 } 00:07:22.920 ] 
00:07:22.920 }' 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.920 22:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.180 [2024-11-26 22:52:02.295824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.180 [2024-11-26 22:52:02.295900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.180 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.440 [2024-11-26 22:52:02.307851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.440 [2024-11-26 22:52:02.309980] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.440 [2024-11-26 22:52:02.310022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.440 22:52:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.440 "name": 
"Existed_Raid", 00:07:23.440 "uuid": "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0", 00:07:23.440 "strip_size_kb": 64, 00:07:23.440 "state": "configuring", 00:07:23.440 "raid_level": "raid0", 00:07:23.440 "superblock": true, 00:07:23.440 "num_base_bdevs": 2, 00:07:23.440 "num_base_bdevs_discovered": 1, 00:07:23.440 "num_base_bdevs_operational": 2, 00:07:23.440 "base_bdevs_list": [ 00:07:23.440 { 00:07:23.440 "name": "BaseBdev1", 00:07:23.440 "uuid": "b7fa99bb-d01a-429f-8c02-fc3cb462a291", 00:07:23.440 "is_configured": true, 00:07:23.440 "data_offset": 2048, 00:07:23.440 "data_size": 63488 00:07:23.440 }, 00:07:23.440 { 00:07:23.440 "name": "BaseBdev2", 00:07:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.440 "is_configured": false, 00:07:23.440 "data_offset": 0, 00:07:23.440 "data_size": 0 00:07:23.440 } 00:07:23.440 ] 00:07:23.440 }' 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.440 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 [2024-11-26 22:52:02.800843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.700 [2024-11-26 22:52:02.801051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:23.700 [2024-11-26 22:52:02.801072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.700 [2024-11-26 22:52:02.801414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:23.700 [2024-11-26 22:52:02.801575] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:23.700 [2024-11-26 22:52:02.801594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:23.700 BaseBdev2 00:07:23.700 [2024-11-26 22:52:02.801729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.700 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.701 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.701 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.701 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.701 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:23.701 [ 00:07:23.701 { 00:07:23.701 "name": "BaseBdev2", 00:07:23.701 "aliases": [ 00:07:23.960 "658adb11-5d36-4884-979d-06dc992ca744" 00:07:23.960 ], 00:07:23.960 "product_name": "Malloc disk", 00:07:23.960 "block_size": 512, 00:07:23.960 "num_blocks": 65536, 00:07:23.960 "uuid": "658adb11-5d36-4884-979d-06dc992ca744", 00:07:23.960 "assigned_rate_limits": { 00:07:23.960 "rw_ios_per_sec": 0, 00:07:23.960 "rw_mbytes_per_sec": 0, 00:07:23.960 "r_mbytes_per_sec": 0, 00:07:23.960 "w_mbytes_per_sec": 0 00:07:23.960 }, 00:07:23.960 "claimed": true, 00:07:23.960 "claim_type": "exclusive_write", 00:07:23.960 "zoned": false, 00:07:23.960 "supported_io_types": { 00:07:23.960 "read": true, 00:07:23.960 "write": true, 00:07:23.960 "unmap": true, 00:07:23.960 "flush": true, 00:07:23.960 "reset": true, 00:07:23.960 "nvme_admin": false, 00:07:23.960 "nvme_io": false, 00:07:23.960 "nvme_io_md": false, 00:07:23.960 "write_zeroes": true, 00:07:23.960 "zcopy": true, 00:07:23.960 "get_zone_info": false, 00:07:23.960 "zone_management": false, 00:07:23.961 "zone_append": false, 00:07:23.961 "compare": false, 00:07:23.961 "compare_and_write": false, 00:07:23.961 "abort": true, 00:07:23.961 "seek_hole": false, 00:07:23.961 "seek_data": false, 00:07:23.961 "copy": true, 00:07:23.961 "nvme_iov_md": false 00:07:23.961 }, 00:07:23.961 "memory_domains": [ 00:07:23.961 { 00:07:23.961 "dma_device_id": "system", 00:07:23.961 "dma_device_type": 1 00:07:23.961 }, 00:07:23.961 { 00:07:23.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.961 "dma_device_type": 2 00:07:23.961 } 00:07:23.961 ], 00:07:23.961 "driver_specific": {} 00:07:23.961 } 00:07:23.961 ] 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.961 
22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.961 "name": 
"Existed_Raid", 00:07:23.961 "uuid": "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0", 00:07:23.961 "strip_size_kb": 64, 00:07:23.961 "state": "online", 00:07:23.961 "raid_level": "raid0", 00:07:23.961 "superblock": true, 00:07:23.961 "num_base_bdevs": 2, 00:07:23.961 "num_base_bdevs_discovered": 2, 00:07:23.961 "num_base_bdevs_operational": 2, 00:07:23.961 "base_bdevs_list": [ 00:07:23.961 { 00:07:23.961 "name": "BaseBdev1", 00:07:23.961 "uuid": "b7fa99bb-d01a-429f-8c02-fc3cb462a291", 00:07:23.961 "is_configured": true, 00:07:23.961 "data_offset": 2048, 00:07:23.961 "data_size": 63488 00:07:23.961 }, 00:07:23.961 { 00:07:23.961 "name": "BaseBdev2", 00:07:23.961 "uuid": "658adb11-5d36-4884-979d-06dc992ca744", 00:07:23.961 "is_configured": true, 00:07:23.961 "data_offset": 2048, 00:07:23.961 "data_size": 63488 00:07:23.961 } 00:07:23.961 ] 00:07:23.961 }' 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.961 22:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.221 [2024-11-26 22:52:03.293390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.221 "name": "Existed_Raid", 00:07:24.221 "aliases": [ 00:07:24.221 "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0" 00:07:24.221 ], 00:07:24.221 "product_name": "Raid Volume", 00:07:24.221 "block_size": 512, 00:07:24.221 "num_blocks": 126976, 00:07:24.221 "uuid": "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0", 00:07:24.221 "assigned_rate_limits": { 00:07:24.221 "rw_ios_per_sec": 0, 00:07:24.221 "rw_mbytes_per_sec": 0, 00:07:24.221 "r_mbytes_per_sec": 0, 00:07:24.221 "w_mbytes_per_sec": 0 00:07:24.221 }, 00:07:24.221 "claimed": false, 00:07:24.221 "zoned": false, 00:07:24.221 "supported_io_types": { 00:07:24.221 "read": true, 00:07:24.221 "write": true, 00:07:24.221 "unmap": true, 00:07:24.221 "flush": true, 00:07:24.221 "reset": true, 00:07:24.221 "nvme_admin": false, 00:07:24.221 "nvme_io": false, 00:07:24.221 "nvme_io_md": false, 00:07:24.221 "write_zeroes": true, 00:07:24.221 "zcopy": false, 00:07:24.221 "get_zone_info": false, 00:07:24.221 "zone_management": false, 00:07:24.221 "zone_append": false, 00:07:24.221 "compare": false, 00:07:24.221 "compare_and_write": false, 00:07:24.221 "abort": false, 00:07:24.221 "seek_hole": false, 00:07:24.221 "seek_data": false, 00:07:24.221 "copy": false, 00:07:24.221 "nvme_iov_md": false 00:07:24.221 }, 00:07:24.221 "memory_domains": [ 00:07:24.221 { 00:07:24.221 "dma_device_id": "system", 00:07:24.221 "dma_device_type": 1 00:07:24.221 }, 00:07:24.221 { 00:07:24.221 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:24.221 "dma_device_type": 2 00:07:24.221 }, 00:07:24.221 { 00:07:24.221 "dma_device_id": "system", 00:07:24.221 "dma_device_type": 1 00:07:24.221 }, 00:07:24.221 { 00:07:24.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.221 "dma_device_type": 2 00:07:24.221 } 00:07:24.221 ], 00:07:24.221 "driver_specific": { 00:07:24.221 "raid": { 00:07:24.221 "uuid": "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0", 00:07:24.221 "strip_size_kb": 64, 00:07:24.221 "state": "online", 00:07:24.221 "raid_level": "raid0", 00:07:24.221 "superblock": true, 00:07:24.221 "num_base_bdevs": 2, 00:07:24.221 "num_base_bdevs_discovered": 2, 00:07:24.221 "num_base_bdevs_operational": 2, 00:07:24.221 "base_bdevs_list": [ 00:07:24.221 { 00:07:24.221 "name": "BaseBdev1", 00:07:24.221 "uuid": "b7fa99bb-d01a-429f-8c02-fc3cb462a291", 00:07:24.221 "is_configured": true, 00:07:24.221 "data_offset": 2048, 00:07:24.221 "data_size": 63488 00:07:24.221 }, 00:07:24.221 { 00:07:24.221 "name": "BaseBdev2", 00:07:24.221 "uuid": "658adb11-5d36-4884-979d-06dc992ca744", 00:07:24.221 "is_configured": true, 00:07:24.221 "data_offset": 2048, 00:07:24.221 "data_size": 63488 00:07:24.221 } 00:07:24.221 ] 00:07:24.221 } 00:07:24.221 } 00:07:24.221 }' 00:07:24.221 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:24.481 BaseBdev2' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.481 22:52:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.481 [2024-11-26 22:52:03.541114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.481 [2024-11-26 22:52:03.541150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.481 [2024-11-26 22:52:03.541213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.481 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.740 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.740 "name": "Existed_Raid", 00:07:24.740 "uuid": "6e583d5c-7aa5-4dcd-8f30-35adaa6969f0", 00:07:24.740 "strip_size_kb": 64, 00:07:24.740 "state": "offline", 00:07:24.740 "raid_level": "raid0", 00:07:24.740 "superblock": true, 00:07:24.740 "num_base_bdevs": 2, 00:07:24.740 "num_base_bdevs_discovered": 1, 00:07:24.740 "num_base_bdevs_operational": 1, 00:07:24.740 "base_bdevs_list": [ 00:07:24.740 { 00:07:24.740 "name": null, 00:07:24.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.741 "is_configured": false, 00:07:24.741 "data_offset": 0, 00:07:24.741 "data_size": 63488 00:07:24.741 }, 00:07:24.741 { 00:07:24.741 "name": "BaseBdev2", 00:07:24.741 "uuid": "658adb11-5d36-4884-979d-06dc992ca744", 00:07:24.741 "is_configured": true, 00:07:24.741 "data_offset": 2048, 00:07:24.741 "data_size": 63488 00:07:24.741 } 00:07:24.741 ] 00:07:24.741 }' 00:07:24.741 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:24.741 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 22:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 [2024-11-26 22:52:04.025560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.001 [2024-11-26 22:52:04.025642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.001 22:52:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73964 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73964 ']' 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73964 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:25.001 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.002 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73964 00:07:25.261 killing process with pid 73964 00:07:25.261 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.261 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.261 22:52:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73964' 00:07:25.261 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73964 00:07:25.261 [2024-11-26 22:52:04.146469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.261 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73964 00:07:25.261 [2024-11-26 22:52:04.148030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.521 22:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.521 00:07:25.521 real 0m4.087s 00:07:25.521 user 0m6.272s 00:07:25.521 sys 0m0.912s 00:07:25.521 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.522 22:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.522 ************************************ 00:07:25.522 END TEST raid_state_function_test_sb 00:07:25.522 ************************************ 00:07:25.522 22:52:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:25.522 22:52:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:25.522 22:52:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.522 22:52:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.522 ************************************ 00:07:25.522 START TEST raid_superblock_test 00:07:25.522 ************************************ 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:25.522 22:52:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74204 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74204 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74204 ']' 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.522 22:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.522 [2024-11-26 22:52:04.637380] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:25.522 [2024-11-26 22:52:04.637488] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74204 ] 00:07:25.783 [2024-11-26 22:52:04.772733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.783 [2024-11-26 22:52:04.795775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.783 [2024-11-26 22:52:04.840728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.043 [2024-11-26 22:52:04.921812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.043 [2024-11-26 22:52:04.921861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.613 malloc1 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.613 [2024-11-26 22:52:05.476053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:26.613 [2024-11-26 22:52:05.476124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.613 [2024-11-26 22:52:05.476147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:26.613 [2024-11-26 22:52:05.476156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.613 [2024-11-26 22:52:05.478170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.613 [2024-11-26 22:52:05.478209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:26.613 pt1 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.613 malloc2 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.613 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.613 [2024-11-26 22:52:05.504624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.613 [2024-11-26 22:52:05.504675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.613 [2024-11-26 22:52:05.504693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:26.614 [2024-11-26 22:52:05.504701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.614 [2024-11-26 22:52:05.506635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.614 [2024-11-26 22:52:05.506671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.614 pt2 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.614 [2024-11-26 22:52:05.516648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:26.614 [2024-11-26 22:52:05.518333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.614 [2024-11-26 22:52:05.518462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:26.614 [2024-11-26 22:52:05.518479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.614 [2024-11-26 22:52:05.518715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:26.614 [2024-11-26 22:52:05.518844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:26.614 [2024-11-26 22:52:05.518856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:26.614 [2024-11-26 22:52:05.518967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.614 22:52:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.614 "name": "raid_bdev1", 00:07:26.614 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:26.614 "strip_size_kb": 64, 00:07:26.614 "state": "online", 00:07:26.614 "raid_level": "raid0", 00:07:26.614 "superblock": true, 00:07:26.614 "num_base_bdevs": 2, 00:07:26.614 "num_base_bdevs_discovered": 2, 00:07:26.614 "num_base_bdevs_operational": 2, 00:07:26.614 "base_bdevs_list": [ 00:07:26.614 { 00:07:26.614 "name": "pt1", 00:07:26.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.614 "is_configured": true, 00:07:26.614 "data_offset": 2048, 00:07:26.614 "data_size": 63488 00:07:26.614 }, 00:07:26.614 { 00:07:26.614 "name": "pt2", 00:07:26.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.614 
"is_configured": true, 00:07:26.614 "data_offset": 2048, 00:07:26.614 "data_size": 63488 00:07:26.614 } 00:07:26.614 ] 00:07:26.614 }' 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.614 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.875 [2024-11-26 22:52:05.965042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.875 22:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.875 "name": "raid_bdev1", 00:07:26.875 "aliases": [ 00:07:26.875 "a7717861-7b67-451e-8605-35f29f0fda8b" 00:07:26.875 ], 00:07:26.875 "product_name": "Raid Volume", 00:07:26.875 "block_size": 512, 00:07:26.875 "num_blocks": 126976, 00:07:26.875 "uuid": 
"a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:26.875 "assigned_rate_limits": { 00:07:26.875 "rw_ios_per_sec": 0, 00:07:26.875 "rw_mbytes_per_sec": 0, 00:07:26.875 "r_mbytes_per_sec": 0, 00:07:26.875 "w_mbytes_per_sec": 0 00:07:26.875 }, 00:07:26.875 "claimed": false, 00:07:26.875 "zoned": false, 00:07:26.875 "supported_io_types": { 00:07:26.875 "read": true, 00:07:26.875 "write": true, 00:07:26.875 "unmap": true, 00:07:26.875 "flush": true, 00:07:26.875 "reset": true, 00:07:26.875 "nvme_admin": false, 00:07:26.875 "nvme_io": false, 00:07:26.875 "nvme_io_md": false, 00:07:26.875 "write_zeroes": true, 00:07:26.875 "zcopy": false, 00:07:26.875 "get_zone_info": false, 00:07:26.875 "zone_management": false, 00:07:26.875 "zone_append": false, 00:07:26.875 "compare": false, 00:07:26.875 "compare_and_write": false, 00:07:26.875 "abort": false, 00:07:26.875 "seek_hole": false, 00:07:26.875 "seek_data": false, 00:07:26.875 "copy": false, 00:07:26.875 "nvme_iov_md": false 00:07:26.875 }, 00:07:26.875 "memory_domains": [ 00:07:26.875 { 00:07:26.875 "dma_device_id": "system", 00:07:26.875 "dma_device_type": 1 00:07:26.875 }, 00:07:26.875 { 00:07:26.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.875 "dma_device_type": 2 00:07:26.875 }, 00:07:26.875 { 00:07:26.875 "dma_device_id": "system", 00:07:26.875 "dma_device_type": 1 00:07:26.875 }, 00:07:26.875 { 00:07:26.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.875 "dma_device_type": 2 00:07:26.875 } 00:07:26.875 ], 00:07:26.875 "driver_specific": { 00:07:26.875 "raid": { 00:07:26.875 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:26.875 "strip_size_kb": 64, 00:07:26.875 "state": "online", 00:07:26.875 "raid_level": "raid0", 00:07:26.875 "superblock": true, 00:07:26.875 "num_base_bdevs": 2, 00:07:26.875 "num_base_bdevs_discovered": 2, 00:07:26.875 "num_base_bdevs_operational": 2, 00:07:26.875 "base_bdevs_list": [ 00:07:26.875 { 00:07:26.875 "name": "pt1", 00:07:26.875 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:26.875 "is_configured": true, 00:07:26.875 "data_offset": 2048, 00:07:26.875 "data_size": 63488 00:07:26.875 }, 00:07:26.875 { 00:07:26.875 "name": "pt2", 00:07:26.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.875 "is_configured": true, 00:07:26.875 "data_offset": 2048, 00:07:26.875 "data_size": 63488 00:07:26.875 } 00:07:26.875 ] 00:07:26.875 } 00:07:26.875 } 00:07:26.875 }' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.136 pt2' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 [2024-11-26 22:52:06.181008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7717861-7b67-451e-8605-35f29f0fda8b 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7717861-7b67-451e-8605-35f29f0fda8b ']' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.136 22:52:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 [2024-11-26 22:52:06.224799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.136 [2024-11-26 22:52:06.224832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.136 [2024-11-26 22:52:06.224907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.136 [2024-11-26 22:52:06.224951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.136 [2024-11-26 22:52:06.224966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:27.136 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.399 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.399 [2024-11-26 22:52:06.332909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:27.399 [2024-11-26 22:52:06.334700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:27.399 [2024-11-26 22:52:06.334764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:27.399 [2024-11-26 22:52:06.334816] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:27.400 [2024-11-26 22:52:06.334832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.400 [2024-11-26 22:52:06.334843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:27.400 request: 00:07:27.400 { 00:07:27.400 "name": "raid_bdev1", 00:07:27.400 "raid_level": "raid0", 00:07:27.400 "base_bdevs": [ 00:07:27.400 "malloc1", 00:07:27.400 "malloc2" 00:07:27.400 ], 00:07:27.400 "strip_size_kb": 64, 00:07:27.400 "superblock": false, 00:07:27.400 "method": "bdev_raid_create", 00:07:27.400 "req_id": 1 00:07:27.400 } 00:07:27.400 Got JSON-RPC error response 00:07:27.400 response: 00:07:27.400 { 00:07:27.400 "code": -17, 00:07:27.400 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:27.400 } 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.400 [2024-11-26 22:52:06.396898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.400 [2024-11-26 22:52:06.396945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.400 [2024-11-26 22:52:06.396976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:27.400 
[2024-11-26 22:52:06.396988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.400 [2024-11-26 22:52:06.399016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.400 [2024-11-26 22:52:06.399055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.400 [2024-11-26 22:52:06.399116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.400 [2024-11-26 22:52:06.399151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.400 pt1 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.400 "name": "raid_bdev1", 00:07:27.400 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:27.400 "strip_size_kb": 64, 00:07:27.400 "state": "configuring", 00:07:27.400 "raid_level": "raid0", 00:07:27.400 "superblock": true, 00:07:27.400 "num_base_bdevs": 2, 00:07:27.400 "num_base_bdevs_discovered": 1, 00:07:27.400 "num_base_bdevs_operational": 2, 00:07:27.400 "base_bdevs_list": [ 00:07:27.400 { 00:07:27.400 "name": "pt1", 00:07:27.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.400 "is_configured": true, 00:07:27.400 "data_offset": 2048, 00:07:27.400 "data_size": 63488 00:07:27.400 }, 00:07:27.400 { 00:07:27.400 "name": null, 00:07:27.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.400 "is_configured": false, 00:07:27.400 "data_offset": 2048, 00:07:27.400 "data_size": 63488 00:07:27.400 } 00:07:27.400 ] 00:07:27.400 }' 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.400 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 [2024-11-26 22:52:06.821061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:28.008 [2024-11-26 22:52:06.821155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.008 [2024-11-26 22:52:06.821178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:28.008 [2024-11-26 22:52:06.821189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.008 [2024-11-26 22:52:06.821628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.008 [2024-11-26 22:52:06.821659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:28.008 [2024-11-26 22:52:06.821738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:28.008 [2024-11-26 22:52:06.821764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.008 [2024-11-26 22:52:06.821848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:28.008 [2024-11-26 22:52:06.821860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.008 [2024-11-26 22:52:06.822098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.008 [2024-11-26 22:52:06.822230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:28.008 [2024-11-26 22:52:06.822244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:28.008 [2024-11-26 22:52:06.822369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.008 
pt2 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.008 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.009 22:52:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.009 "name": "raid_bdev1", 00:07:28.009 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:28.009 "strip_size_kb": 64, 00:07:28.009 "state": "online", 00:07:28.009 "raid_level": "raid0", 00:07:28.009 "superblock": true, 00:07:28.009 "num_base_bdevs": 2, 00:07:28.009 "num_base_bdevs_discovered": 2, 00:07:28.009 "num_base_bdevs_operational": 2, 00:07:28.009 "base_bdevs_list": [ 00:07:28.009 { 00:07:28.009 "name": "pt1", 00:07:28.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.009 "is_configured": true, 00:07:28.009 "data_offset": 2048, 00:07:28.009 "data_size": 63488 00:07:28.009 }, 00:07:28.009 { 00:07:28.009 "name": "pt2", 00:07:28.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.009 "is_configured": true, 00:07:28.009 "data_offset": 2048, 00:07:28.009 "data_size": 63488 00:07:28.009 } 00:07:28.009 ] 00:07:28.009 }' 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.009 22:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 [2024-11-26 22:52:07.285395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.267 "name": "raid_bdev1", 00:07:28.267 "aliases": [ 00:07:28.267 "a7717861-7b67-451e-8605-35f29f0fda8b" 00:07:28.267 ], 00:07:28.267 "product_name": "Raid Volume", 00:07:28.267 "block_size": 512, 00:07:28.267 "num_blocks": 126976, 00:07:28.267 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:28.267 "assigned_rate_limits": { 00:07:28.267 "rw_ios_per_sec": 0, 00:07:28.267 "rw_mbytes_per_sec": 0, 00:07:28.267 "r_mbytes_per_sec": 0, 00:07:28.267 "w_mbytes_per_sec": 0 00:07:28.267 }, 00:07:28.267 "claimed": false, 00:07:28.267 "zoned": false, 00:07:28.267 "supported_io_types": { 00:07:28.267 "read": true, 00:07:28.267 "write": true, 00:07:28.267 "unmap": true, 00:07:28.267 "flush": true, 00:07:28.267 "reset": true, 00:07:28.267 "nvme_admin": false, 00:07:28.267 "nvme_io": false, 00:07:28.267 "nvme_io_md": false, 00:07:28.267 "write_zeroes": true, 00:07:28.267 "zcopy": false, 00:07:28.267 "get_zone_info": false, 00:07:28.267 "zone_management": false, 00:07:28.267 "zone_append": false, 00:07:28.267 "compare": false, 00:07:28.267 "compare_and_write": false, 00:07:28.267 "abort": false, 00:07:28.267 "seek_hole": false, 00:07:28.267 "seek_data": false, 00:07:28.267 "copy": false, 00:07:28.267 "nvme_iov_md": false 00:07:28.267 }, 00:07:28.267 "memory_domains": [ 00:07:28.267 { 00:07:28.267 "dma_device_id": "system", 00:07:28.267 "dma_device_type": 1 00:07:28.267 }, 00:07:28.267 { 00:07:28.267 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:28.267 "dma_device_type": 2 00:07:28.267 }, 00:07:28.267 { 00:07:28.267 "dma_device_id": "system", 00:07:28.267 "dma_device_type": 1 00:07:28.267 }, 00:07:28.267 { 00:07:28.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.267 "dma_device_type": 2 00:07:28.267 } 00:07:28.267 ], 00:07:28.267 "driver_specific": { 00:07:28.267 "raid": { 00:07:28.267 "uuid": "a7717861-7b67-451e-8605-35f29f0fda8b", 00:07:28.267 "strip_size_kb": 64, 00:07:28.267 "state": "online", 00:07:28.267 "raid_level": "raid0", 00:07:28.267 "superblock": true, 00:07:28.267 "num_base_bdevs": 2, 00:07:28.267 "num_base_bdevs_discovered": 2, 00:07:28.267 "num_base_bdevs_operational": 2, 00:07:28.267 "base_bdevs_list": [ 00:07:28.267 { 00:07:28.267 "name": "pt1", 00:07:28.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.267 "is_configured": true, 00:07:28.267 "data_offset": 2048, 00:07:28.267 "data_size": 63488 00:07:28.267 }, 00:07:28.267 { 00:07:28.267 "name": "pt2", 00:07:28.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.267 "is_configured": true, 00:07:28.267 "data_offset": 2048, 00:07:28.267 "data_size": 63488 00:07:28.267 } 00:07:28.267 ] 00:07:28.267 } 00:07:28.267 } 00:07:28.267 }' 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.267 pt2' 00:07:28.267 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt1 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.526 [2024-11-26 22:52:07.517423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7717861-7b67-451e-8605-35f29f0fda8b '!=' a7717861-7b67-451e-8605-35f29f0fda8b ']' 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:28.526 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74204 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74204 ']' 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74204 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74204 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.527 killing process with pid 74204 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74204' 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74204 00:07:28.527 [2024-11-26 22:52:07.573550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:28.527 [2024-11-26 22:52:07.573638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.527 [2024-11-26 22:52:07.573682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.527 [2024-11-26 22:52:07.573694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:28.527 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74204 00:07:28.527 [2024-11-26 22:52:07.596584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.786 22:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.787 00:07:28.787 real 0m3.264s 00:07:28.787 user 0m5.008s 00:07:28.787 sys 0m0.741s 00:07:28.787 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.787 22:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.787 ************************************ 00:07:28.787 END TEST raid_superblock_test 00:07:28.787 ************************************ 00:07:28.787 22:52:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:28.787 22:52:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.787 22:52:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.787 22:52:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.787 ************************************ 00:07:28.787 START TEST raid_read_error_test 00:07:28.787 ************************************ 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.787 22:52:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.R2D7At7lzu 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74406 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74406 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74406 ']' 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.787 22:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.047 [2024-11-26 22:52:07.990733] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:29.047 [2024-11-26 22:52:07.990880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ] 00:07:29.047 [2024-11-26 22:52:08.129217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:29.047 [2024-11-26 22:52:08.169321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.306 [2024-11-26 22:52:08.195320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.306 [2024-11-26 22:52:08.238432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.306 [2024-11-26 22:52:08.238480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 BaseBdev1_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 true 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 [2024-11-26 22:52:08.834515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.873 [2024-11-26 22:52:08.834579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.873 [2024-11-26 22:52:08.834612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:29.873 [2024-11-26 22:52:08.834625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.873 [2024-11-26 22:52:08.836714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.873 [2024-11-26 22:52:08.836752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.873 BaseBdev1 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 BaseBdev2_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 true 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 [2024-11-26 22:52:08.875126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.873 [2024-11-26 22:52:08.875179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.873 [2024-11-26 22:52:08.875210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:29.873 [2024-11-26 22:52:08.875222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.873 [2024-11-26 22:52:08.877204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.873 [2024-11-26 22:52:08.877241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.873 BaseBdev2 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.873 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.873 [2024-11-26 22:52:08.887163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.873 [2024-11-26 22:52:08.888904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.873 [2024-11-26 22:52:08.889056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:07:29.873 [2024-11-26 22:52:08.889070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.873 [2024-11-26 22:52:08.889297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:29.873 [2024-11-26 22:52:08.889440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:29.873 [2024-11-26 22:52:08.889460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:29.874 [2024-11-26 22:52:08.889575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.874 "name": "raid_bdev1", 00:07:29.874 "uuid": "bf5cc754-b209-4da6-a852-80d0be6f685f", 00:07:29.874 "strip_size_kb": 64, 00:07:29.874 "state": "online", 00:07:29.874 "raid_level": "raid0", 00:07:29.874 "superblock": true, 00:07:29.874 "num_base_bdevs": 2, 00:07:29.874 "num_base_bdevs_discovered": 2, 00:07:29.874 "num_base_bdevs_operational": 2, 00:07:29.874 "base_bdevs_list": [ 00:07:29.874 { 00:07:29.874 "name": "BaseBdev1", 00:07:29.874 "uuid": "be59f9b1-c137-588d-9ace-3ce3dc254406", 00:07:29.874 "is_configured": true, 00:07:29.874 "data_offset": 2048, 00:07:29.874 "data_size": 63488 00:07:29.874 }, 00:07:29.874 { 00:07:29.874 "name": "BaseBdev2", 00:07:29.874 "uuid": "fb73ca9c-1df0-5242-82b4-776c185bf70a", 00:07:29.874 "is_configured": true, 00:07:29.874 "data_offset": 2048, 00:07:29.874 "data_size": 63488 00:07:29.874 } 00:07:29.874 ] 00:07:29.874 }' 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.874 22:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.440 22:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.440 22:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.440 [2024-11-26 22:52:09.391688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:31.378 
22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.378 "name": "raid_bdev1", 00:07:31.378 "uuid": "bf5cc754-b209-4da6-a852-80d0be6f685f", 00:07:31.378 "strip_size_kb": 64, 00:07:31.378 "state": "online", 00:07:31.378 "raid_level": "raid0", 00:07:31.378 "superblock": true, 00:07:31.378 "num_base_bdevs": 2, 00:07:31.378 "num_base_bdevs_discovered": 2, 00:07:31.378 "num_base_bdevs_operational": 2, 00:07:31.378 "base_bdevs_list": [ 00:07:31.378 { 00:07:31.378 "name": "BaseBdev1", 00:07:31.378 "uuid": "be59f9b1-c137-588d-9ace-3ce3dc254406", 00:07:31.378 "is_configured": true, 00:07:31.378 "data_offset": 2048, 00:07:31.378 "data_size": 63488 00:07:31.378 }, 00:07:31.378 { 00:07:31.378 "name": "BaseBdev2", 00:07:31.378 "uuid": "fb73ca9c-1df0-5242-82b4-776c185bf70a", 00:07:31.378 "is_configured": true, 00:07:31.378 "data_offset": 2048, 00:07:31.378 "data_size": 63488 00:07:31.378 } 00:07:31.378 ] 00:07:31.378 }' 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.378 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.637 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.637 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.637 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.637 [2024-11-26 22:52:10.761570] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.637 [2024-11-26 22:52:10.761622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.898 [2024-11-26 22:52:10.764221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.898 [2024-11-26 22:52:10.764293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.898 [2024-11-26 22:52:10.764332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.898 [2024-11-26 22:52:10.764345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:31.898 { 00:07:31.898 "results": [ 00:07:31.898 { 00:07:31.898 "job": "raid_bdev1", 00:07:31.898 "core_mask": "0x1", 00:07:31.898 "workload": "randrw", 00:07:31.898 "percentage": 50, 00:07:31.898 "status": "finished", 00:07:31.898 "queue_depth": 1, 00:07:31.898 "io_size": 131072, 00:07:31.898 "runtime": 1.368227, 00:07:31.898 "iops": 17328.995846449456, 00:07:31.898 "mibps": 2166.124480806182, 00:07:31.898 "io_failed": 1, 00:07:31.898 "io_timeout": 0, 00:07:31.898 "avg_latency_us": 79.52778802467404, 00:07:31.898 "min_latency_us": 24.54458293384468, 00:07:31.898 "max_latency_us": 1385.2070077573433 00:07:31.898 } 00:07:31.898 ], 00:07:31.898 "core_count": 1 00:07:31.898 } 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74406 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74406 ']' 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74406 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74406 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.898 killing process with pid 74406 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74406' 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74406 00:07:31.898 [2024-11-26 22:52:10.814635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.898 22:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74406 00:07:31.898 [2024-11-26 22:52:10.830558] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.R2D7At7lzu 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:32.159 00:07:32.159 real 0m3.172s 00:07:32.159 user 0m3.996s 00:07:32.159 sys 0m0.534s 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:32.159 22:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.159 ************************************ 00:07:32.159 END TEST raid_read_error_test 00:07:32.159 ************************************ 00:07:32.159 22:52:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:32.159 22:52:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.159 22:52:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.159 22:52:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.159 ************************************ 00:07:32.159 START TEST raid_write_error_test 00:07:32.159 ************************************ 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ftWmpFD3qQ 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74535 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74535 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74535 ']' 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.159 22:52:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.159 22:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.159 [2024-11-26 22:52:11.233116] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:32.159 [2024-11-26 22:52:11.233331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74535 ] 00:07:32.419 [2024-11-26 22:52:11.371631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:32.419 [2024-11-26 22:52:11.411311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.419 [2024-11-26 22:52:11.436842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.419 [2024-11-26 22:52:11.480487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.419 [2024-11-26 22:52:11.480525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.989 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 BaseBdev1_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 true 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 [2024-11-26 22:52:12.074008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:32.990 [2024-11-26 22:52:12.074166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.990 [2024-11-26 22:52:12.074219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:32.990 [2024-11-26 22:52:12.074261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.990 [2024-11-26 22:52:12.076311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.990 [2024-11-26 22:52:12.076383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:32.990 BaseBdev1 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 BaseBdev2_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 true 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.990 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 [2024-11-26 22:52:12.114600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:32.990 [2024-11-26 22:52:12.114709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.990 [2024-11-26 22:52:12.114741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:32.990 [2024-11-26 22:52:12.114771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.249 [2024-11-26 22:52:12.116782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.249 [2024-11-26 22:52:12.116855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.249 BaseBdev2 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.249 [2024-11-26 22:52:12.126640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.249 [2024-11-26 22:52:12.128426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.249 [2024-11-26 22:52:12.128617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:33.249 
[2024-11-26 22:52:12.128655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.249 [2024-11-26 22:52:12.128907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:33.249 [2024-11-26 22:52:12.129103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:33.249 [2024-11-26 22:52:12.129145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:33.249 [2024-11-26 22:52:12.129328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.249 22:52:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.249 "name": "raid_bdev1", 00:07:33.249 "uuid": "972ef5b4-5296-4eaf-a875-b2996f15e0cc", 00:07:33.249 "strip_size_kb": 64, 00:07:33.249 "state": "online", 00:07:33.249 "raid_level": "raid0", 00:07:33.249 "superblock": true, 00:07:33.249 "num_base_bdevs": 2, 00:07:33.249 "num_base_bdevs_discovered": 2, 00:07:33.249 "num_base_bdevs_operational": 2, 00:07:33.249 "base_bdevs_list": [ 00:07:33.249 { 00:07:33.249 "name": "BaseBdev1", 00:07:33.249 "uuid": "dceef02e-1a45-5d8d-a068-6588789e74e2", 00:07:33.249 "is_configured": true, 00:07:33.249 "data_offset": 2048, 00:07:33.249 "data_size": 63488 00:07:33.249 }, 00:07:33.249 { 00:07:33.249 "name": "BaseBdev2", 00:07:33.249 "uuid": "52db969c-7a4b-5920-9886-e6c85bc0cf61", 00:07:33.249 "is_configured": true, 00:07:33.249 "data_offset": 2048, 00:07:33.249 "data_size": 63488 00:07:33.249 } 00:07:33.249 ] 00:07:33.249 }' 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.249 22:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.508 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:33.508 22:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:33.768 [2024-11-26 22:52:12.635105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:34.708 22:52:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.708 "name": "raid_bdev1", 00:07:34.708 "uuid": "972ef5b4-5296-4eaf-a875-b2996f15e0cc", 00:07:34.708 "strip_size_kb": 64, 00:07:34.708 "state": "online", 00:07:34.708 "raid_level": "raid0", 00:07:34.708 "superblock": true, 00:07:34.708 "num_base_bdevs": 2, 00:07:34.708 "num_base_bdevs_discovered": 2, 00:07:34.708 "num_base_bdevs_operational": 2, 00:07:34.708 "base_bdevs_list": [ 00:07:34.708 { 00:07:34.708 "name": "BaseBdev1", 00:07:34.708 "uuid": "dceef02e-1a45-5d8d-a068-6588789e74e2", 00:07:34.708 "is_configured": true, 00:07:34.708 "data_offset": 2048, 00:07:34.708 "data_size": 63488 00:07:34.708 }, 00:07:34.708 { 00:07:34.708 "name": "BaseBdev2", 00:07:34.708 "uuid": "52db969c-7a4b-5920-9886-e6c85bc0cf61", 00:07:34.708 "is_configured": true, 00:07:34.708 "data_offset": 2048, 00:07:34.708 "data_size": 63488 00:07:34.708 } 00:07:34.708 ] 00:07:34.708 }' 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.708 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.969 [2024-11-26 22:52:13.989620] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.969 [2024-11-26 22:52:13.989748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.969 [2024-11-26 22:52:13.992202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.969 [2024-11-26 22:52:13.992311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.969 [2024-11-26 22:52:13.992361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.969 [2024-11-26 22:52:13.992406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:34.969 { 00:07:34.969 "results": [ 00:07:34.969 { 00:07:34.969 "job": "raid_bdev1", 00:07:34.969 "core_mask": "0x1", 00:07:34.969 "workload": "randrw", 00:07:34.969 "percentage": 50, 00:07:34.969 "status": "finished", 00:07:34.969 "queue_depth": 1, 00:07:34.969 "io_size": 131072, 00:07:34.969 "runtime": 1.3528, 00:07:34.969 "iops": 17423.122412773508, 00:07:34.969 "mibps": 2177.8903015966885, 00:07:34.969 "io_failed": 1, 00:07:34.969 "io_timeout": 0, 00:07:34.969 "avg_latency_us": 79.01442828755114, 00:07:34.969 "min_latency_us": 24.76771550597054, 00:07:34.969 "max_latency_us": 1313.8045846770679 00:07:34.969 } 00:07:34.969 ], 00:07:34.969 "core_count": 1 00:07:34.969 } 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74535 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74535 ']' 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74535 00:07:34.969 22:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:34.969 22:52:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74535 00:07:34.969 killing process with pid 74535 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74535' 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74535 00:07:34.969 [2024-11-26 22:52:14.036383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.969 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74535 00:07:34.969 [2024-11-26 22:52:14.051713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ftWmpFD3qQ 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.229 ************************************ 00:07:35.229 END TEST raid_write_error_test 00:07:35.229 ************************************ 00:07:35.229 22:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 
]] 00:07:35.229 00:07:35.229 real 0m3.150s 00:07:35.229 user 0m3.959s 00:07:35.229 sys 0m0.527s 00:07:35.230 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.230 22:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.230 22:52:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.230 22:52:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:35.230 22:52:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.230 22:52:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.230 22:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.230 ************************************ 00:07:35.230 START TEST raid_state_function_test 00:07:35.230 ************************************ 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.230 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74662 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74662' 00:07:35.490 Process raid pid: 74662 
00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74662 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74662 ']' 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.490 22:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.490 [2024-11-26 22:52:14.449306] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:35.490 [2024-11-26 22:52:14.449521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.490 [2024-11-26 22:52:14.589192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:35.750 [2024-11-26 22:52:14.628969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.750 [2024-11-26 22:52:14.654794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.750 [2024-11-26 22:52:14.697803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.750 [2024-11-26 22:52:14.697843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.319 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.319 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.320 [2024-11-26 22:52:15.266551] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.320 [2024-11-26 22:52:15.266660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.320 [2024-11-26 22:52:15.266704] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.320 [2024-11-26 22:52:15.266725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.320 "name": "Existed_Raid", 00:07:36.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.320 "strip_size_kb": 64, 00:07:36.320 "state": "configuring", 00:07:36.320 "raid_level": "concat", 00:07:36.320 "superblock": false, 00:07:36.320 "num_base_bdevs": 2, 00:07:36.320 "num_base_bdevs_discovered": 0, 00:07:36.320 "num_base_bdevs_operational": 2, 00:07:36.320 "base_bdevs_list": [ 00:07:36.320 { 00:07:36.320 "name": "BaseBdev1", 00:07:36.320 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:36.320 "is_configured": false, 00:07:36.320 "data_offset": 0, 00:07:36.320 "data_size": 0 00:07:36.320 }, 00:07:36.320 { 00:07:36.320 "name": "BaseBdev2", 00:07:36.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.320 "is_configured": false, 00:07:36.320 "data_offset": 0, 00:07:36.320 "data_size": 0 00:07:36.320 } 00:07:36.320 ] 00:07:36.320 }' 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.320 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 [2024-11-26 22:52:15.714581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.889 [2024-11-26 22:52:15.714660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 [2024-11-26 22:52:15.726609] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.889 [2024-11-26 22:52:15.726694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.889 
[2024-11-26 22:52:15.726741] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.889 [2024-11-26 22:52:15.726761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 [2024-11-26 22:52:15.747569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.889 BaseBdev1 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 22:52:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.889 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 [ 00:07:36.889 { 00:07:36.889 "name": "BaseBdev1", 00:07:36.889 "aliases": [ 00:07:36.889 "281b29bd-8669-4c79-b89d-aedda598ddc2" 00:07:36.889 ], 00:07:36.889 "product_name": "Malloc disk", 00:07:36.889 "block_size": 512, 00:07:36.889 "num_blocks": 65536, 00:07:36.889 "uuid": "281b29bd-8669-4c79-b89d-aedda598ddc2", 00:07:36.889 "assigned_rate_limits": { 00:07:36.889 "rw_ios_per_sec": 0, 00:07:36.889 "rw_mbytes_per_sec": 0, 00:07:36.889 "r_mbytes_per_sec": 0, 00:07:36.889 "w_mbytes_per_sec": 0 00:07:36.889 }, 00:07:36.889 "claimed": true, 00:07:36.889 "claim_type": "exclusive_write", 00:07:36.889 "zoned": false, 00:07:36.889 "supported_io_types": { 00:07:36.889 "read": true, 00:07:36.889 "write": true, 00:07:36.889 "unmap": true, 00:07:36.889 "flush": true, 00:07:36.889 "reset": true, 00:07:36.889 "nvme_admin": false, 00:07:36.889 "nvme_io": false, 00:07:36.889 "nvme_io_md": false, 00:07:36.889 "write_zeroes": true, 00:07:36.889 "zcopy": true, 00:07:36.889 "get_zone_info": false, 00:07:36.889 "zone_management": false, 00:07:36.889 "zone_append": false, 00:07:36.889 "compare": false, 00:07:36.889 "compare_and_write": false, 00:07:36.889 "abort": true, 00:07:36.889 "seek_hole": false, 00:07:36.889 "seek_data": false, 00:07:36.889 "copy": true, 00:07:36.890 "nvme_iov_md": false 00:07:36.890 }, 00:07:36.890 "memory_domains": [ 00:07:36.890 { 00:07:36.890 "dma_device_id": "system", 00:07:36.890 "dma_device_type": 1 00:07:36.890 }, 00:07:36.890 { 00:07:36.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.890 "dma_device_type": 
2 00:07:36.890 } 00:07:36.890 ], 00:07:36.890 "driver_specific": {} 00:07:36.890 } 00:07:36.890 ] 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.890 "name": "Existed_Raid", 00:07:36.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.890 "strip_size_kb": 64, 00:07:36.890 "state": "configuring", 00:07:36.890 "raid_level": "concat", 00:07:36.890 "superblock": false, 00:07:36.890 "num_base_bdevs": 2, 00:07:36.890 "num_base_bdevs_discovered": 1, 00:07:36.890 "num_base_bdevs_operational": 2, 00:07:36.890 "base_bdevs_list": [ 00:07:36.890 { 00:07:36.890 "name": "BaseBdev1", 00:07:36.890 "uuid": "281b29bd-8669-4c79-b89d-aedda598ddc2", 00:07:36.890 "is_configured": true, 00:07:36.890 "data_offset": 0, 00:07:36.890 "data_size": 65536 00:07:36.890 }, 00:07:36.890 { 00:07:36.890 "name": "BaseBdev2", 00:07:36.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.890 "is_configured": false, 00:07:36.890 "data_offset": 0, 00:07:36.890 "data_size": 0 00:07:36.890 } 00:07:36.890 ] 00:07:36.890 }' 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.890 22:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 [2024-11-26 22:52:16.251763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.150 [2024-11-26 22:52:16.251816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.150 22:52:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 [2024-11-26 22:52:16.263792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.150 [2024-11-26 22:52:16.265665] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.150 [2024-11-26 22:52:16.265737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.150 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.411 "name": "Existed_Raid", 00:07:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.411 "strip_size_kb": 64, 00:07:37.411 "state": "configuring", 00:07:37.411 "raid_level": "concat", 00:07:37.411 "superblock": false, 00:07:37.411 "num_base_bdevs": 2, 00:07:37.411 "num_base_bdevs_discovered": 1, 00:07:37.411 "num_base_bdevs_operational": 2, 00:07:37.411 "base_bdevs_list": [ 00:07:37.411 { 00:07:37.411 "name": "BaseBdev1", 00:07:37.411 "uuid": "281b29bd-8669-4c79-b89d-aedda598ddc2", 00:07:37.411 "is_configured": true, 00:07:37.411 "data_offset": 0, 00:07:37.411 "data_size": 65536 00:07:37.411 }, 00:07:37.411 { 00:07:37.411 "name": "BaseBdev2", 00:07:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.411 "is_configured": false, 00:07:37.411 "data_offset": 0, 00:07:37.411 "data_size": 0 00:07:37.411 } 00:07:37.411 ] 00:07:37.411 }' 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.411 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.671 [2024-11-26 22:52:16.747058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.671 [2024-11-26 22:52:16.747196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:37.671 [2024-11-26 22:52:16.747226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.671 BaseBdev2 00:07:37.671 [2024-11-26 22:52:16.747502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:37.671 [2024-11-26 22:52:16.747678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:37.671 [2024-11-26 22:52:16.747688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:37.671 [2024-11-26 22:52:16.747875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.671 [ 00:07:37.671 { 00:07:37.671 "name": "BaseBdev2", 00:07:37.671 "aliases": [ 00:07:37.671 "2fc85a35-e417-4d4a-a160-df1c0f9e3778" 00:07:37.671 ], 00:07:37.671 "product_name": "Malloc disk", 00:07:37.671 "block_size": 512, 00:07:37.671 "num_blocks": 65536, 00:07:37.671 "uuid": "2fc85a35-e417-4d4a-a160-df1c0f9e3778", 00:07:37.671 "assigned_rate_limits": { 00:07:37.671 "rw_ios_per_sec": 0, 00:07:37.671 "rw_mbytes_per_sec": 0, 00:07:37.671 "r_mbytes_per_sec": 0, 00:07:37.671 "w_mbytes_per_sec": 0 00:07:37.671 }, 00:07:37.671 "claimed": true, 00:07:37.671 "claim_type": "exclusive_write", 00:07:37.671 "zoned": false, 00:07:37.671 "supported_io_types": { 00:07:37.671 "read": true, 00:07:37.671 "write": true, 00:07:37.671 "unmap": true, 00:07:37.671 "flush": true, 00:07:37.671 "reset": true, 00:07:37.671 "nvme_admin": false, 00:07:37.671 "nvme_io": false, 00:07:37.671 "nvme_io_md": false, 00:07:37.671 "write_zeroes": true, 00:07:37.671 "zcopy": true, 00:07:37.671 "get_zone_info": false, 00:07:37.671 "zone_management": false, 00:07:37.671 "zone_append": false, 00:07:37.671 "compare": false, 00:07:37.671 "compare_and_write": false, 
00:07:37.671 "abort": true, 00:07:37.671 "seek_hole": false, 00:07:37.671 "seek_data": false, 00:07:37.671 "copy": true, 00:07:37.671 "nvme_iov_md": false 00:07:37.671 }, 00:07:37.671 "memory_domains": [ 00:07:37.671 { 00:07:37.671 "dma_device_id": "system", 00:07:37.671 "dma_device_type": 1 00:07:37.671 }, 00:07:37.671 { 00:07:37.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.671 "dma_device_type": 2 00:07:37.671 } 00:07:37.671 ], 00:07:37.671 "driver_specific": {} 00:07:37.671 } 00:07:37.671 ] 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.671 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.672 
22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.672 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.931 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.931 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.931 "name": "Existed_Raid", 00:07:37.931 "uuid": "cb3d241e-ec45-4bcb-8b68-ab77bb55d425", 00:07:37.931 "strip_size_kb": 64, 00:07:37.931 "state": "online", 00:07:37.931 "raid_level": "concat", 00:07:37.931 "superblock": false, 00:07:37.931 "num_base_bdevs": 2, 00:07:37.931 "num_base_bdevs_discovered": 2, 00:07:37.931 "num_base_bdevs_operational": 2, 00:07:37.931 "base_bdevs_list": [ 00:07:37.931 { 00:07:37.931 "name": "BaseBdev1", 00:07:37.931 "uuid": "281b29bd-8669-4c79-b89d-aedda598ddc2", 00:07:37.931 "is_configured": true, 00:07:37.931 "data_offset": 0, 00:07:37.931 "data_size": 65536 00:07:37.931 }, 00:07:37.931 { 00:07:37.931 "name": "BaseBdev2", 00:07:37.931 "uuid": "2fc85a35-e417-4d4a-a160-df1c0f9e3778", 00:07:37.931 "is_configured": true, 00:07:37.931 "data_offset": 0, 00:07:37.931 "data_size": 65536 00:07:37.931 } 00:07:37.931 ] 00:07:37.931 }' 00:07:37.931 22:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.931 22:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.190 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.190 22:52:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.190 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.190 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.190 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.191 [2024-11-26 22:52:17.231499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.191 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.191 "name": "Existed_Raid", 00:07:38.191 "aliases": [ 00:07:38.191 "cb3d241e-ec45-4bcb-8b68-ab77bb55d425" 00:07:38.191 ], 00:07:38.191 "product_name": "Raid Volume", 00:07:38.191 "block_size": 512, 00:07:38.191 "num_blocks": 131072, 00:07:38.191 "uuid": "cb3d241e-ec45-4bcb-8b68-ab77bb55d425", 00:07:38.191 "assigned_rate_limits": { 00:07:38.191 "rw_ios_per_sec": 0, 00:07:38.191 "rw_mbytes_per_sec": 0, 00:07:38.191 "r_mbytes_per_sec": 0, 00:07:38.191 "w_mbytes_per_sec": 0 00:07:38.191 }, 00:07:38.191 "claimed": false, 00:07:38.191 "zoned": false, 00:07:38.191 "supported_io_types": { 00:07:38.191 "read": true, 00:07:38.191 "write": true, 00:07:38.191 "unmap": true, 00:07:38.191 
"flush": true, 00:07:38.191 "reset": true, 00:07:38.191 "nvme_admin": false, 00:07:38.191 "nvme_io": false, 00:07:38.191 "nvme_io_md": false, 00:07:38.191 "write_zeroes": true, 00:07:38.191 "zcopy": false, 00:07:38.191 "get_zone_info": false, 00:07:38.191 "zone_management": false, 00:07:38.191 "zone_append": false, 00:07:38.191 "compare": false, 00:07:38.191 "compare_and_write": false, 00:07:38.191 "abort": false, 00:07:38.191 "seek_hole": false, 00:07:38.191 "seek_data": false, 00:07:38.191 "copy": false, 00:07:38.191 "nvme_iov_md": false 00:07:38.191 }, 00:07:38.191 "memory_domains": [ 00:07:38.191 { 00:07:38.191 "dma_device_id": "system", 00:07:38.191 "dma_device_type": 1 00:07:38.191 }, 00:07:38.191 { 00:07:38.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.191 "dma_device_type": 2 00:07:38.191 }, 00:07:38.191 { 00:07:38.191 "dma_device_id": "system", 00:07:38.191 "dma_device_type": 1 00:07:38.191 }, 00:07:38.191 { 00:07:38.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.191 "dma_device_type": 2 00:07:38.191 } 00:07:38.191 ], 00:07:38.191 "driver_specific": { 00:07:38.191 "raid": { 00:07:38.191 "uuid": "cb3d241e-ec45-4bcb-8b68-ab77bb55d425", 00:07:38.191 "strip_size_kb": 64, 00:07:38.191 "state": "online", 00:07:38.191 "raid_level": "concat", 00:07:38.191 "superblock": false, 00:07:38.191 "num_base_bdevs": 2, 00:07:38.191 "num_base_bdevs_discovered": 2, 00:07:38.191 "num_base_bdevs_operational": 2, 00:07:38.191 "base_bdevs_list": [ 00:07:38.191 { 00:07:38.191 "name": "BaseBdev1", 00:07:38.191 "uuid": "281b29bd-8669-4c79-b89d-aedda598ddc2", 00:07:38.191 "is_configured": true, 00:07:38.191 "data_offset": 0, 00:07:38.191 "data_size": 65536 00:07:38.191 }, 00:07:38.191 { 00:07:38.191 "name": "BaseBdev2", 00:07:38.191 "uuid": "2fc85a35-e417-4d4a-a160-df1c0f9e3778", 00:07:38.191 "is_configured": true, 00:07:38.191 "data_offset": 0, 00:07:38.191 "data_size": 65536 00:07:38.191 } 00:07:38.191 ] 00:07:38.191 } 00:07:38.191 } 00:07:38.191 }' 00:07:38.191 
22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.451 BaseBdev2' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.451 22:52:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.451 [2024-11-26 22:52:17.471348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.451 [2024-11-26 22:52:17.471415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.451 [2024-11-26 22:52:17.471515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.451 "name": "Existed_Raid", 00:07:38.451 "uuid": "cb3d241e-ec45-4bcb-8b68-ab77bb55d425", 00:07:38.451 "strip_size_kb": 64, 00:07:38.451 "state": "offline", 00:07:38.451 "raid_level": "concat", 00:07:38.451 "superblock": false, 00:07:38.451 "num_base_bdevs": 2, 00:07:38.451 "num_base_bdevs_discovered": 1, 00:07:38.451 "num_base_bdevs_operational": 1, 00:07:38.451 
"base_bdevs_list": [ 00:07:38.451 { 00:07:38.451 "name": null, 00:07:38.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.451 "is_configured": false, 00:07:38.451 "data_offset": 0, 00:07:38.451 "data_size": 65536 00:07:38.451 }, 00:07:38.451 { 00:07:38.451 "name": "BaseBdev2", 00:07:38.451 "uuid": "2fc85a35-e417-4d4a-a160-df1c0f9e3778", 00:07:38.451 "is_configured": true, 00:07:38.451 "data_offset": 0, 00:07:38.451 "data_size": 65536 00:07:38.451 } 00:07:38.451 ] 00:07:38.451 }' 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.451 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.021 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.022 [2024-11-26 22:52:17.958674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.022 [2024-11-26 22:52:17.958794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.022 22:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74662 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74662 ']' 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74662 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74662 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.022 killing process with pid 74662 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74662' 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74662 00:07:39.022 [2024-11-26 22:52:18.071794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.022 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74662 00:07:39.022 [2024-11-26 22:52:18.072822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.281 22:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.281 00:07:39.281 real 0m3.950s 00:07:39.281 user 0m6.229s 00:07:39.281 sys 0m0.821s 00:07:39.281 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.281 ************************************ 00:07:39.282 END TEST raid_state_function_test 00:07:39.282 ************************************ 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.282 22:52:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:39.282 22:52:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.282 22:52:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.282 22:52:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.282 ************************************ 00:07:39.282 START TEST 
raid_state_function_test_sb 00:07:39.282 ************************************ 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.282 
22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.282 Process raid pid: 74904 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74904 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74904' 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74904 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74904 ']' 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.282 22:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.542 [2024-11-26 22:52:18.466411] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:39.542 [2024-11-26 22:52:18.466631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.542 [2024-11-26 22:52:18.601442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.542 [2024-11-26 22:52:18.641018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.542 [2024-11-26 22:52:18.666283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.801 [2024-11-26 22:52:18.708766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.801 [2024-11-26 22:52:18.708802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.387 [2024-11-26 22:52:19.280834] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:07:40.387 [2024-11-26 22:52:19.280961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.387 [2024-11-26 22:52:19.280994] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.387 [2024-11-26 22:52:19.281014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.387 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.388 22:52:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.388 "name": "Existed_Raid", 00:07:40.388 "uuid": "ef1cb071-0446-4154-90ed-0e03b96f4d2f", 00:07:40.388 "strip_size_kb": 64, 00:07:40.388 "state": "configuring", 00:07:40.388 "raid_level": "concat", 00:07:40.388 "superblock": true, 00:07:40.388 "num_base_bdevs": 2, 00:07:40.388 "num_base_bdevs_discovered": 0, 00:07:40.388 "num_base_bdevs_operational": 2, 00:07:40.388 "base_bdevs_list": [ 00:07:40.388 { 00:07:40.388 "name": "BaseBdev1", 00:07:40.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.388 "is_configured": false, 00:07:40.388 "data_offset": 0, 00:07:40.388 "data_size": 0 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": "BaseBdev2", 00:07:40.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.388 "is_configured": false, 00:07:40.388 "data_offset": 0, 00:07:40.388 "data_size": 0 00:07:40.388 } 00:07:40.388 ] 00:07:40.388 }' 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.388 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 [2024-11-26 22:52:19.672845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.647 [2024-11-26 22:52:19.672935] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 [2024-11-26 22:52:19.684881] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.647 [2024-11-26 22:52:19.684966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.647 [2024-11-26 22:52:19.684998] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.647 [2024-11-26 22:52:19.685018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 [2024-11-26 22:52:19.706068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.648 BaseBdev1 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:40.648 
22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 [ 00:07:40.648 { 00:07:40.648 "name": "BaseBdev1", 00:07:40.648 "aliases": [ 00:07:40.648 "0d6f95be-94db-492f-8bc3-72702ce0cfe8" 00:07:40.648 ], 00:07:40.648 "product_name": "Malloc disk", 00:07:40.648 "block_size": 512, 00:07:40.648 "num_blocks": 65536, 00:07:40.648 "uuid": "0d6f95be-94db-492f-8bc3-72702ce0cfe8", 00:07:40.648 "assigned_rate_limits": { 00:07:40.648 "rw_ios_per_sec": 0, 00:07:40.648 "rw_mbytes_per_sec": 0, 00:07:40.648 "r_mbytes_per_sec": 0, 00:07:40.648 "w_mbytes_per_sec": 0 00:07:40.648 }, 00:07:40.648 "claimed": true, 00:07:40.648 "claim_type": "exclusive_write", 00:07:40.648 "zoned": 
false, 00:07:40.648 "supported_io_types": { 00:07:40.648 "read": true, 00:07:40.648 "write": true, 00:07:40.648 "unmap": true, 00:07:40.648 "flush": true, 00:07:40.648 "reset": true, 00:07:40.648 "nvme_admin": false, 00:07:40.648 "nvme_io": false, 00:07:40.648 "nvme_io_md": false, 00:07:40.648 "write_zeroes": true, 00:07:40.648 "zcopy": true, 00:07:40.648 "get_zone_info": false, 00:07:40.648 "zone_management": false, 00:07:40.648 "zone_append": false, 00:07:40.648 "compare": false, 00:07:40.648 "compare_and_write": false, 00:07:40.648 "abort": true, 00:07:40.648 "seek_hole": false, 00:07:40.648 "seek_data": false, 00:07:40.648 "copy": true, 00:07:40.648 "nvme_iov_md": false 00:07:40.648 }, 00:07:40.648 "memory_domains": [ 00:07:40.648 { 00:07:40.648 "dma_device_id": "system", 00:07:40.648 "dma_device_type": 1 00:07:40.648 }, 00:07:40.648 { 00:07:40.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.648 "dma_device_type": 2 00:07:40.648 } 00:07:40.648 ], 00:07:40.648 "driver_specific": {} 00:07:40.648 } 00:07:40.648 ] 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.648 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.908 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.908 "name": "Existed_Raid", 00:07:40.908 "uuid": "5cc8107e-5134-4d16-b59e-304bc5d8f2f6", 00:07:40.908 "strip_size_kb": 64, 00:07:40.908 "state": "configuring", 00:07:40.908 "raid_level": "concat", 00:07:40.908 "superblock": true, 00:07:40.908 "num_base_bdevs": 2, 00:07:40.908 "num_base_bdevs_discovered": 1, 00:07:40.908 "num_base_bdevs_operational": 2, 00:07:40.908 "base_bdevs_list": [ 00:07:40.908 { 00:07:40.908 "name": "BaseBdev1", 00:07:40.908 "uuid": "0d6f95be-94db-492f-8bc3-72702ce0cfe8", 00:07:40.908 "is_configured": true, 00:07:40.908 "data_offset": 2048, 00:07:40.908 "data_size": 63488 00:07:40.908 }, 00:07:40.908 { 00:07:40.908 "name": "BaseBdev2", 00:07:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.908 "is_configured": false, 00:07:40.908 "data_offset": 0, 00:07:40.908 "data_size": 0 
00:07:40.908 } 00:07:40.908 ] 00:07:40.908 }' 00:07:40.908 22:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.908 22:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.168 [2024-11-26 22:52:20.162245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.168 [2024-11-26 22:52:20.162385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.168 [2024-11-26 22:52:20.174290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.168 [2024-11-26 22:52:20.176028] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.168 [2024-11-26 22:52:20.176102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.168 
22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.168 
"name": "Existed_Raid", 00:07:41.168 "uuid": "94f200e8-a33d-453e-83be-e64351d00035", 00:07:41.168 "strip_size_kb": 64, 00:07:41.168 "state": "configuring", 00:07:41.168 "raid_level": "concat", 00:07:41.168 "superblock": true, 00:07:41.168 "num_base_bdevs": 2, 00:07:41.168 "num_base_bdevs_discovered": 1, 00:07:41.168 "num_base_bdevs_operational": 2, 00:07:41.168 "base_bdevs_list": [ 00:07:41.168 { 00:07:41.168 "name": "BaseBdev1", 00:07:41.168 "uuid": "0d6f95be-94db-492f-8bc3-72702ce0cfe8", 00:07:41.168 "is_configured": true, 00:07:41.168 "data_offset": 2048, 00:07:41.168 "data_size": 63488 00:07:41.168 }, 00:07:41.168 { 00:07:41.168 "name": "BaseBdev2", 00:07:41.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.168 "is_configured": false, 00:07:41.168 "data_offset": 0, 00:07:41.168 "data_size": 0 00:07:41.168 } 00:07:41.168 ] 00:07:41.168 }' 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.168 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.738 [2024-11-26 22:52:20.605436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.738 [2024-11-26 22:52:20.605703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:41.738 [2024-11-26 22:52:20.605743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.738 BaseBdev2 00:07:41.738 [2024-11-26 22:52:20.606060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:41.738 [2024-11-26 22:52:20.606219] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:41.738 [2024-11-26 22:52:20.606236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:41.738 [2024-11-26 22:52:20.606371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.738 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.738 [ 00:07:41.738 
{ 00:07:41.738 "name": "BaseBdev2", 00:07:41.738 "aliases": [ 00:07:41.738 "737ad5f5-e032-4ca4-928a-fe522045a07f" 00:07:41.738 ], 00:07:41.738 "product_name": "Malloc disk", 00:07:41.738 "block_size": 512, 00:07:41.738 "num_blocks": 65536, 00:07:41.738 "uuid": "737ad5f5-e032-4ca4-928a-fe522045a07f", 00:07:41.738 "assigned_rate_limits": { 00:07:41.738 "rw_ios_per_sec": 0, 00:07:41.738 "rw_mbytes_per_sec": 0, 00:07:41.738 "r_mbytes_per_sec": 0, 00:07:41.738 "w_mbytes_per_sec": 0 00:07:41.738 }, 00:07:41.738 "claimed": true, 00:07:41.738 "claim_type": "exclusive_write", 00:07:41.738 "zoned": false, 00:07:41.738 "supported_io_types": { 00:07:41.738 "read": true, 00:07:41.738 "write": true, 00:07:41.738 "unmap": true, 00:07:41.738 "flush": true, 00:07:41.738 "reset": true, 00:07:41.738 "nvme_admin": false, 00:07:41.738 "nvme_io": false, 00:07:41.738 "nvme_io_md": false, 00:07:41.738 "write_zeroes": true, 00:07:41.738 "zcopy": true, 00:07:41.738 "get_zone_info": false, 00:07:41.738 "zone_management": false, 00:07:41.738 "zone_append": false, 00:07:41.738 "compare": false, 00:07:41.738 "compare_and_write": false, 00:07:41.738 "abort": true, 00:07:41.738 "seek_hole": false, 00:07:41.738 "seek_data": false, 00:07:41.738 "copy": true, 00:07:41.738 "nvme_iov_md": false 00:07:41.738 }, 00:07:41.738 "memory_domains": [ 00:07:41.738 { 00:07:41.739 "dma_device_id": "system", 00:07:41.739 "dma_device_type": 1 00:07:41.739 }, 00:07:41.739 { 00:07:41.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.739 "dma_device_type": 2 00:07:41.739 } 00:07:41.739 ], 00:07:41.739 "driver_specific": {} 00:07:41.739 } 00:07:41.739 ] 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.739 22:52:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.739 "name": 
"Existed_Raid", 00:07:41.739 "uuid": "94f200e8-a33d-453e-83be-e64351d00035", 00:07:41.739 "strip_size_kb": 64, 00:07:41.739 "state": "online", 00:07:41.739 "raid_level": "concat", 00:07:41.739 "superblock": true, 00:07:41.739 "num_base_bdevs": 2, 00:07:41.739 "num_base_bdevs_discovered": 2, 00:07:41.739 "num_base_bdevs_operational": 2, 00:07:41.739 "base_bdevs_list": [ 00:07:41.739 { 00:07:41.739 "name": "BaseBdev1", 00:07:41.739 "uuid": "0d6f95be-94db-492f-8bc3-72702ce0cfe8", 00:07:41.739 "is_configured": true, 00:07:41.739 "data_offset": 2048, 00:07:41.739 "data_size": 63488 00:07:41.739 }, 00:07:41.739 { 00:07:41.739 "name": "BaseBdev2", 00:07:41.739 "uuid": "737ad5f5-e032-4ca4-928a-fe522045a07f", 00:07:41.739 "is_configured": true, 00:07:41.739 "data_offset": 2048, 00:07:41.739 "data_size": 63488 00:07:41.739 } 00:07:41.739 ] 00:07:41.739 }' 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.739 22:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.998 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.999 [2024-11-26 22:52:21.069831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.999 "name": "Existed_Raid", 00:07:41.999 "aliases": [ 00:07:41.999 "94f200e8-a33d-453e-83be-e64351d00035" 00:07:41.999 ], 00:07:41.999 "product_name": "Raid Volume", 00:07:41.999 "block_size": 512, 00:07:41.999 "num_blocks": 126976, 00:07:41.999 "uuid": "94f200e8-a33d-453e-83be-e64351d00035", 00:07:41.999 "assigned_rate_limits": { 00:07:41.999 "rw_ios_per_sec": 0, 00:07:41.999 "rw_mbytes_per_sec": 0, 00:07:41.999 "r_mbytes_per_sec": 0, 00:07:41.999 "w_mbytes_per_sec": 0 00:07:41.999 }, 00:07:41.999 "claimed": false, 00:07:41.999 "zoned": false, 00:07:41.999 "supported_io_types": { 00:07:41.999 "read": true, 00:07:41.999 "write": true, 00:07:41.999 "unmap": true, 00:07:41.999 "flush": true, 00:07:41.999 "reset": true, 00:07:41.999 "nvme_admin": false, 00:07:41.999 "nvme_io": false, 00:07:41.999 "nvme_io_md": false, 00:07:41.999 "write_zeroes": true, 00:07:41.999 "zcopy": false, 00:07:41.999 "get_zone_info": false, 00:07:41.999 "zone_management": false, 00:07:41.999 "zone_append": false, 00:07:41.999 "compare": false, 00:07:41.999 "compare_and_write": false, 00:07:41.999 "abort": false, 00:07:41.999 "seek_hole": false, 00:07:41.999 "seek_data": false, 00:07:41.999 "copy": false, 00:07:41.999 "nvme_iov_md": false 00:07:41.999 }, 00:07:41.999 "memory_domains": [ 00:07:41.999 { 00:07:41.999 "dma_device_id": "system", 00:07:41.999 "dma_device_type": 1 00:07:41.999 }, 00:07:41.999 { 00:07:41.999 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:41.999 "dma_device_type": 2 00:07:41.999 }, 00:07:41.999 { 00:07:41.999 "dma_device_id": "system", 00:07:41.999 "dma_device_type": 1 00:07:41.999 }, 00:07:41.999 { 00:07:41.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.999 "dma_device_type": 2 00:07:41.999 } 00:07:41.999 ], 00:07:41.999 "driver_specific": { 00:07:41.999 "raid": { 00:07:41.999 "uuid": "94f200e8-a33d-453e-83be-e64351d00035", 00:07:41.999 "strip_size_kb": 64, 00:07:41.999 "state": "online", 00:07:41.999 "raid_level": "concat", 00:07:41.999 "superblock": true, 00:07:41.999 "num_base_bdevs": 2, 00:07:41.999 "num_base_bdevs_discovered": 2, 00:07:41.999 "num_base_bdevs_operational": 2, 00:07:41.999 "base_bdevs_list": [ 00:07:41.999 { 00:07:41.999 "name": "BaseBdev1", 00:07:41.999 "uuid": "0d6f95be-94db-492f-8bc3-72702ce0cfe8", 00:07:41.999 "is_configured": true, 00:07:41.999 "data_offset": 2048, 00:07:41.999 "data_size": 63488 00:07:41.999 }, 00:07:41.999 { 00:07:41.999 "name": "BaseBdev2", 00:07:41.999 "uuid": "737ad5f5-e032-4ca4-928a-fe522045a07f", 00:07:41.999 "is_configured": true, 00:07:41.999 "data_offset": 2048, 00:07:41.999 "data_size": 63488 00:07:41.999 } 00:07:41.999 ] 00:07:41.999 } 00:07:41.999 } 00:07:41.999 }' 00:07:41.999 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.274 BaseBdev2' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.274 22:52:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.274 [2024-11-26 22:52:21.297732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.274 [2024-11-26 22:52:21.297760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.274 [2024-11-26 22:52:21.297814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.274 "name": "Existed_Raid", 00:07:42.274 "uuid": "94f200e8-a33d-453e-83be-e64351d00035", 00:07:42.274 "strip_size_kb": 64, 00:07:42.274 "state": "offline", 00:07:42.274 "raid_level": "concat", 00:07:42.274 "superblock": true, 00:07:42.274 "num_base_bdevs": 2, 00:07:42.274 "num_base_bdevs_discovered": 1, 00:07:42.274 "num_base_bdevs_operational": 1, 00:07:42.274 "base_bdevs_list": [ 00:07:42.274 { 00:07:42.274 "name": null, 00:07:42.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.274 "is_configured": false, 00:07:42.274 "data_offset": 0, 00:07:42.274 "data_size": 63488 00:07:42.274 }, 00:07:42.274 { 00:07:42.274 "name": "BaseBdev2", 00:07:42.274 "uuid": "737ad5f5-e032-4ca4-928a-fe522045a07f", 00:07:42.274 "is_configured": true, 00:07:42.274 "data_offset": 2048, 00:07:42.274 "data_size": 63488 00:07:42.274 } 00:07:42.274 ] 00:07:42.274 }' 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:07:42.274 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 [2024-11-26 22:52:21.721502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.845 [2024-11-26 22:52:21.721609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:42.845 22:52:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74904 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74904 ']' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74904 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74904 00:07:42.845 killing process with pid 74904 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.845 22:52:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74904' 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74904 00:07:42.845 [2024-11-26 22:52:21.814302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.845 22:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74904 00:07:42.845 [2024-11-26 22:52:21.815240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.106 22:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.106 00:07:43.106 real 0m3.668s 00:07:43.106 user 0m5.677s 00:07:43.106 sys 0m0.816s 00:07:43.106 22:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.106 ************************************ 00:07:43.106 END TEST raid_state_function_test_sb 00:07:43.106 ************************************ 00:07:43.106 22:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 22:52:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:43.106 22:52:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:43.106 22:52:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.106 22:52:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 ************************************ 00:07:43.106 START TEST raid_superblock_test 00:07:43.106 ************************************ 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:43.106 22:52:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75134 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75134 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75134 ']' 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.106 22:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 [2024-11-26 22:52:22.199938] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:43.106 [2024-11-26 22:52:22.200127] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:07:43.367 [2024-11-26 22:52:22.339408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:43.367 [2024-11-26 22:52:22.376982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.367 [2024-11-26 22:52:22.402310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.367 [2024-11-26 22:52:22.445175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.367 [2024-11-26 22:52:22.445303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.936 malloc1 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.936 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.936 [2024-11-26 22:52:23.034419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.936 [2024-11-26 22:52:23.034595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.936 [2024-11-26 22:52:23.034649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:43.936 [2024-11-26 22:52:23.034682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.937 [2024-11-26 22:52:23.036832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.937 [2024-11-26 22:52:23.036908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.937 pt1 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.937 malloc2 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.937 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.197 [2024-11-26 22:52:23.063184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.197 [2024-11-26 22:52:23.063325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.197 [2024-11-26 22:52:23.063364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:44.197 [2024-11-26 22:52:23.063397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.197 [2024-11-26 22:52:23.065439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.197 [2024-11-26 22:52:23.065506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.197 pt2 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.197 [2024-11-26 22:52:23.075210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.197 [2024-11-26 22:52:23.076950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.197 [2024-11-26 22:52:23.077117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:44.197 [2024-11-26 22:52:23.077168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.197 [2024-11-26 22:52:23.077439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:44.197 [2024-11-26 22:52:23.077602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:44.197 [2024-11-26 22:52:23.077644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:44.197 [2024-11-26 22:52:23.077791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.197 22:52:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.197 "name": "raid_bdev1", 00:07:44.197 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:44.197 "strip_size_kb": 64, 00:07:44.197 "state": "online", 00:07:44.197 "raid_level": "concat", 00:07:44.197 "superblock": true, 00:07:44.197 "num_base_bdevs": 2, 00:07:44.197 "num_base_bdevs_discovered": 2, 00:07:44.197 "num_base_bdevs_operational": 2, 00:07:44.197 "base_bdevs_list": [ 00:07:44.197 { 00:07:44.197 "name": "pt1", 00:07:44.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.197 "is_configured": true, 00:07:44.197 "data_offset": 2048, 00:07:44.197 "data_size": 63488 00:07:44.197 }, 00:07:44.197 { 00:07:44.197 "name": "pt2", 00:07:44.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.197 
"is_configured": true, 00:07:44.197 "data_offset": 2048, 00:07:44.197 "data_size": 63488 00:07:44.197 } 00:07:44.197 ] 00:07:44.197 }' 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.197 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.457 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.458 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.458 [2024-11-26 22:52:23.531673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.458 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.458 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.458 "name": "raid_bdev1", 00:07:44.458 "aliases": [ 00:07:44.458 "89091592-910d-4fb4-aa20-81622577c927" 00:07:44.458 ], 00:07:44.458 "product_name": "Raid Volume", 00:07:44.458 "block_size": 512, 00:07:44.458 "num_blocks": 126976, 00:07:44.458 "uuid": 
"89091592-910d-4fb4-aa20-81622577c927", 00:07:44.458 "assigned_rate_limits": { 00:07:44.458 "rw_ios_per_sec": 0, 00:07:44.458 "rw_mbytes_per_sec": 0, 00:07:44.458 "r_mbytes_per_sec": 0, 00:07:44.458 "w_mbytes_per_sec": 0 00:07:44.458 }, 00:07:44.458 "claimed": false, 00:07:44.458 "zoned": false, 00:07:44.458 "supported_io_types": { 00:07:44.458 "read": true, 00:07:44.458 "write": true, 00:07:44.458 "unmap": true, 00:07:44.458 "flush": true, 00:07:44.458 "reset": true, 00:07:44.458 "nvme_admin": false, 00:07:44.458 "nvme_io": false, 00:07:44.458 "nvme_io_md": false, 00:07:44.458 "write_zeroes": true, 00:07:44.458 "zcopy": false, 00:07:44.458 "get_zone_info": false, 00:07:44.458 "zone_management": false, 00:07:44.458 "zone_append": false, 00:07:44.458 "compare": false, 00:07:44.458 "compare_and_write": false, 00:07:44.458 "abort": false, 00:07:44.458 "seek_hole": false, 00:07:44.458 "seek_data": false, 00:07:44.458 "copy": false, 00:07:44.458 "nvme_iov_md": false 00:07:44.458 }, 00:07:44.458 "memory_domains": [ 00:07:44.458 { 00:07:44.458 "dma_device_id": "system", 00:07:44.458 "dma_device_type": 1 00:07:44.458 }, 00:07:44.458 { 00:07:44.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.458 "dma_device_type": 2 00:07:44.458 }, 00:07:44.458 { 00:07:44.458 "dma_device_id": "system", 00:07:44.458 "dma_device_type": 1 00:07:44.458 }, 00:07:44.458 { 00:07:44.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.458 "dma_device_type": 2 00:07:44.458 } 00:07:44.458 ], 00:07:44.458 "driver_specific": { 00:07:44.458 "raid": { 00:07:44.458 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:44.458 "strip_size_kb": 64, 00:07:44.458 "state": "online", 00:07:44.458 "raid_level": "concat", 00:07:44.458 "superblock": true, 00:07:44.458 "num_base_bdevs": 2, 00:07:44.458 "num_base_bdevs_discovered": 2, 00:07:44.458 "num_base_bdevs_operational": 2, 00:07:44.458 "base_bdevs_list": [ 00:07:44.458 { 00:07:44.458 "name": "pt1", 00:07:44.458 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:44.458 "is_configured": true, 00:07:44.458 "data_offset": 2048, 00:07:44.458 "data_size": 63488 00:07:44.458 }, 00:07:44.458 { 00:07:44.458 "name": "pt2", 00:07:44.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.458 "is_configured": true, 00:07:44.458 "data_offset": 2048, 00:07:44.458 "data_size": 63488 00:07:44.458 } 00:07:44.458 ] 00:07:44.458 } 00:07:44.458 } 00:07:44.458 }' 00:07:44.458 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.718 pt2' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:44.718 [2024-11-26 22:52:23.775599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89091592-910d-4fb4-aa20-81622577c927 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 89091592-910d-4fb4-aa20-81622577c927 ']' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.718 22:52:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.718 [2024-11-26 22:52:23.823388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.718 [2024-11-26 22:52:23.823456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.718 [2024-11-26 22:52:23.823571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.718 [2024-11-26 22:52:23.823634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.718 [2024-11-26 22:52:23.823719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:44.718 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 [2024-11-26 22:52:23.963470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.989 [2024-11-26 22:52:23.965205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.989 [2024-11-26 22:52:23.965320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:44.989 [2024-11-26 22:52:23.965397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:44.989 [2024-11-26 22:52:23.965449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.989 [2024-11-26 22:52:23.965509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:44.989 request: 00:07:44.989 { 00:07:44.989 "name": "raid_bdev1", 00:07:44.989 "raid_level": "concat", 00:07:44.989 "base_bdevs": [ 00:07:44.989 "malloc1", 00:07:44.989 "malloc2" 00:07:44.989 ], 00:07:44.989 "strip_size_kb": 64, 00:07:44.989 "superblock": false, 00:07:44.989 "method": "bdev_raid_create", 00:07:44.989 "req_id": 1 00:07:44.989 } 00:07:44.989 Got JSON-RPC error response 00:07:44.989 response: 00:07:44.989 { 00:07:44.989 "code": -17, 00:07:44.989 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:44.989 } 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 22:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 [2024-11-26 22:52:24.031452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.989 [2024-11-26 22:52:24.031551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.989 [2024-11-26 22:52:24.031597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:44.989 
[2024-11-26 22:52:24.031628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.989 [2024-11-26 22:52:24.033613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.989 [2024-11-26 22:52:24.033679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.989 [2024-11-26 22:52:24.033774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:44.989 [2024-11-26 22:52:24.033831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.989 pt1 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.989 "name": "raid_bdev1", 00:07:44.989 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:44.989 "strip_size_kb": 64, 00:07:44.989 "state": "configuring", 00:07:44.989 "raid_level": "concat", 00:07:44.989 "superblock": true, 00:07:44.989 "num_base_bdevs": 2, 00:07:44.989 "num_base_bdevs_discovered": 1, 00:07:44.989 "num_base_bdevs_operational": 2, 00:07:44.989 "base_bdevs_list": [ 00:07:44.989 { 00:07:44.989 "name": "pt1", 00:07:44.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.989 "is_configured": true, 00:07:44.989 "data_offset": 2048, 00:07:44.989 "data_size": 63488 00:07:44.989 }, 00:07:44.989 { 00:07:44.989 "name": null, 00:07:44.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.989 "is_configured": false, 00:07:44.989 "data_offset": 2048, 00:07:44.989 "data_size": 63488 00:07:44.989 } 00:07:44.989 ] 00:07:44.989 }' 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.989 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.559 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.559 [2024-11-26 22:52:24.523606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.559 [2024-11-26 22:52:24.523740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.559 [2024-11-26 22:52:24.523782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:45.559 [2024-11-26 22:52:24.523812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.559 [2024-11-26 22:52:24.524234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.559 [2024-11-26 22:52:24.524312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.559 [2024-11-26 22:52:24.524418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:45.560 [2024-11-26 22:52:24.524470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.560 [2024-11-26 22:52:24.524574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:45.560 [2024-11-26 22:52:24.524614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.560 [2024-11-26 22:52:24.524871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:45.560 [2024-11-26 22:52:24.525019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:45.560 [2024-11-26 22:52:24.525055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:45.560 [2024-11-26 22:52:24.525189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.560 
pt2 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.560 22:52:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.560 "name": "raid_bdev1", 00:07:45.560 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:45.560 "strip_size_kb": 64, 00:07:45.560 "state": "online", 00:07:45.560 "raid_level": "concat", 00:07:45.560 "superblock": true, 00:07:45.560 "num_base_bdevs": 2, 00:07:45.560 "num_base_bdevs_discovered": 2, 00:07:45.560 "num_base_bdevs_operational": 2, 00:07:45.560 "base_bdevs_list": [ 00:07:45.560 { 00:07:45.560 "name": "pt1", 00:07:45.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.560 "is_configured": true, 00:07:45.560 "data_offset": 2048, 00:07:45.560 "data_size": 63488 00:07:45.560 }, 00:07:45.560 { 00:07:45.560 "name": "pt2", 00:07:45.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.560 "is_configured": true, 00:07:45.560 "data_offset": 2048, 00:07:45.560 "data_size": 63488 00:07:45.560 } 00:07:45.560 ] 00:07:45.560 }' 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.560 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 [2024-11-26 22:52:24.968003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.130 "name": "raid_bdev1", 00:07:46.130 "aliases": [ 00:07:46.130 "89091592-910d-4fb4-aa20-81622577c927" 00:07:46.130 ], 00:07:46.130 "product_name": "Raid Volume", 00:07:46.130 "block_size": 512, 00:07:46.130 "num_blocks": 126976, 00:07:46.130 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:46.130 "assigned_rate_limits": { 00:07:46.130 "rw_ios_per_sec": 0, 00:07:46.130 "rw_mbytes_per_sec": 0, 00:07:46.130 "r_mbytes_per_sec": 0, 00:07:46.130 "w_mbytes_per_sec": 0 00:07:46.130 }, 00:07:46.130 "claimed": false, 00:07:46.130 "zoned": false, 00:07:46.130 "supported_io_types": { 00:07:46.130 "read": true, 00:07:46.130 "write": true, 00:07:46.130 "unmap": true, 00:07:46.130 "flush": true, 00:07:46.130 "reset": true, 00:07:46.130 "nvme_admin": false, 00:07:46.130 "nvme_io": false, 00:07:46.130 "nvme_io_md": false, 00:07:46.130 "write_zeroes": true, 00:07:46.130 "zcopy": false, 00:07:46.130 "get_zone_info": false, 00:07:46.130 "zone_management": false, 00:07:46.130 "zone_append": false, 00:07:46.130 "compare": false, 00:07:46.130 "compare_and_write": false, 00:07:46.130 "abort": false, 00:07:46.130 "seek_hole": false, 00:07:46.130 "seek_data": false, 00:07:46.130 "copy": false, 00:07:46.130 "nvme_iov_md": false 00:07:46.130 }, 00:07:46.130 "memory_domains": [ 00:07:46.130 { 00:07:46.130 "dma_device_id": "system", 00:07:46.130 "dma_device_type": 1 00:07:46.130 }, 00:07:46.130 { 00:07:46.130 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:46.130 "dma_device_type": 2 00:07:46.130 }, 00:07:46.130 { 00:07:46.130 "dma_device_id": "system", 00:07:46.130 "dma_device_type": 1 00:07:46.130 }, 00:07:46.130 { 00:07:46.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.130 "dma_device_type": 2 00:07:46.130 } 00:07:46.130 ], 00:07:46.130 "driver_specific": { 00:07:46.130 "raid": { 00:07:46.130 "uuid": "89091592-910d-4fb4-aa20-81622577c927", 00:07:46.130 "strip_size_kb": 64, 00:07:46.130 "state": "online", 00:07:46.130 "raid_level": "concat", 00:07:46.130 "superblock": true, 00:07:46.130 "num_base_bdevs": 2, 00:07:46.130 "num_base_bdevs_discovered": 2, 00:07:46.130 "num_base_bdevs_operational": 2, 00:07:46.130 "base_bdevs_list": [ 00:07:46.130 { 00:07:46.130 "name": "pt1", 00:07:46.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.130 "is_configured": true, 00:07:46.130 "data_offset": 2048, 00:07:46.130 "data_size": 63488 00:07:46.130 }, 00:07:46.130 { 00:07:46.130 "name": "pt2", 00:07:46.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.130 "is_configured": true, 00:07:46.130 "data_offset": 2048, 00:07:46.130 "data_size": 63488 00:07:46.130 } 00:07:46.130 ] 00:07:46.130 } 00:07:46.130 } 00:07:46.130 }' 00:07:46.130 22:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.130 pt2' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt1 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq 
-r '.[] | .uuid' 00:07:46.130 [2024-11-26 22:52:25.183999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 89091592-910d-4fb4-aa20-81622577c927 '!=' 89091592-910d-4fb4-aa20-81622577c927 ']' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75134 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75134 ']' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75134 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75134 00:07:46.130 killing process with pid 75134 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75134' 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75134 00:07:46.130 [2024-11-26 22:52:25.248206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.130 [2024-11-26 
22:52:25.248308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.130 [2024-11-26 22:52:25.248357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.130 [2024-11-26 22:52:25.248369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:46.130 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75134 00:07:46.390 [2024-11-26 22:52:25.271452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.391 22:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:46.391 00:07:46.391 real 0m3.382s 00:07:46.391 user 0m5.219s 00:07:46.391 sys 0m0.725s 00:07:46.391 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.391 ************************************ 00:07:46.391 END TEST raid_superblock_test 00:07:46.391 ************************************ 00:07:46.391 22:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.650 22:52:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:46.650 22:52:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.650 22:52:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.651 22:52:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 ************************************ 00:07:46.651 START TEST raid_read_error_test 00:07:46.651 ************************************ 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:46.651 
22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HVKqTPvQKC 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75335 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75335 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75335 ']' 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.651 22:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 [2024-11-26 22:52:25.662492] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:46.651 [2024-11-26 22:52:25.662608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:07:46.910 [2024-11-26 22:52:25.795854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:46.910 [2024-11-26 22:52:25.833524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.910 [2024-11-26 22:52:25.858915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.910 [2024-11-26 22:52:25.901358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.910 [2024-11-26 22:52:25.901397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 BaseBdev1_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 true 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 [2024-11-26 22:52:26.510370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:47.479 [2024-11-26 22:52:26.510438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.479 [2024-11-26 22:52:26.510460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:47.479 [2024-11-26 22:52:26.510480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.479 [2024-11-26 22:52:26.512458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.479 [2024-11-26 22:52:26.512495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:47.479 BaseBdev1 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 BaseBdev2_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 true 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 [2024-11-26 22:52:26.550962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.479 [2024-11-26 22:52:26.551014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.479 [2024-11-26 22:52:26.551044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.479 [2024-11-26 22:52:26.551055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.479 [2024-11-26 22:52:26.552963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.479 [2024-11-26 22:52:26.553000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.479 BaseBdev2 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 [2024-11-26 22:52:26.563008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.479 [2024-11-26 22:52:26.564785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.479 [2024-11-26 22:52:26.564949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:07:47.479 [2024-11-26 22:52:26.564977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.479 [2024-11-26 22:52:26.565199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:47.479 [2024-11-26 22:52:26.565367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:47.479 [2024-11-26 22:52:26.565383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:47.479 [2024-11-26 22:52:26.565514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.479 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.480 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.738 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.739 "name": "raid_bdev1", 00:07:47.739 "uuid": "f5d99485-510c-48d0-8828-a64c4cd4b035", 00:07:47.739 "strip_size_kb": 64, 00:07:47.739 "state": "online", 00:07:47.739 "raid_level": "concat", 00:07:47.739 "superblock": true, 00:07:47.739 "num_base_bdevs": 2, 00:07:47.739 "num_base_bdevs_discovered": 2, 00:07:47.739 "num_base_bdevs_operational": 2, 00:07:47.739 "base_bdevs_list": [ 00:07:47.739 { 00:07:47.739 "name": "BaseBdev1", 00:07:47.739 "uuid": "c8e8963c-7b3f-546c-b116-073873851758", 00:07:47.739 "is_configured": true, 00:07:47.739 "data_offset": 2048, 00:07:47.739 "data_size": 63488 00:07:47.739 }, 00:07:47.739 { 00:07:47.739 "name": "BaseBdev2", 00:07:47.739 "uuid": "99e6020a-0ca4-5f39-b9ff-cf8384c9505b", 00:07:47.739 "is_configured": true, 00:07:47.739 "data_offset": 2048, 00:07:47.739 "data_size": 63488 00:07:47.739 } 00:07:47.739 ] 00:07:47.739 }' 00:07:47.739 22:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.739 22:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.999 22:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.999 22:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.999 [2024-11-26 22:52:27.103501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:48.937 
22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.937 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.938 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.197 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.197 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.197 "name": "raid_bdev1", 00:07:49.197 "uuid": "f5d99485-510c-48d0-8828-a64c4cd4b035", 00:07:49.197 "strip_size_kb": 64, 00:07:49.197 "state": "online", 00:07:49.197 "raid_level": "concat", 00:07:49.197 "superblock": true, 00:07:49.197 "num_base_bdevs": 2, 00:07:49.197 "num_base_bdevs_discovered": 2, 00:07:49.197 "num_base_bdevs_operational": 2, 00:07:49.197 "base_bdevs_list": [ 00:07:49.197 { 00:07:49.197 "name": "BaseBdev1", 00:07:49.197 "uuid": "c8e8963c-7b3f-546c-b116-073873851758", 00:07:49.197 "is_configured": true, 00:07:49.197 "data_offset": 2048, 00:07:49.197 "data_size": 63488 00:07:49.197 }, 00:07:49.197 { 00:07:49.197 "name": "BaseBdev2", 00:07:49.197 "uuid": "99e6020a-0ca4-5f39-b9ff-cf8384c9505b", 00:07:49.197 "is_configured": true, 00:07:49.197 "data_offset": 2048, 00:07:49.197 "data_size": 63488 00:07:49.197 } 00:07:49.197 ] 00:07:49.197 }' 00:07:49.197 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.197 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.458 [2024-11-26 22:52:28.521870] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.458 [2024-11-26 22:52:28.521916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.458 [2024-11-26 22:52:28.524552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.458 [2024-11-26 22:52:28.524607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.458 [2024-11-26 22:52:28.524639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.458 [2024-11-26 22:52:28.524652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:49.458 { 00:07:49.458 "results": [ 00:07:49.458 { 00:07:49.458 "job": "raid_bdev1", 00:07:49.458 "core_mask": "0x1", 00:07:49.458 "workload": "randrw", 00:07:49.458 "percentage": 50, 00:07:49.458 "status": "finished", 00:07:49.458 "queue_depth": 1, 00:07:49.458 "io_size": 131072, 00:07:49.458 "runtime": 1.416678, 00:07:49.458 "iops": 17579.153484419185, 00:07:49.458 "mibps": 2197.394185552398, 00:07:49.458 "io_failed": 1, 00:07:49.458 "io_timeout": 0, 00:07:49.458 "avg_latency_us": 78.34292119519877, 00:07:49.458 "min_latency_us": 24.43301664778175, 00:07:49.458 "max_latency_us": 1328.085069293123 00:07:49.458 } 00:07:49.458 ], 00:07:49.458 "core_count": 1 00:07:49.458 } 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75335 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75335 ']' 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75335 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75335 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.458 killing process with pid 75335 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75335' 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75335 00:07:49.458 [2024-11-26 22:52:28.571867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.458 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75335 00:07:49.718 [2024-11-26 22:52:28.587497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HVKqTPvQKC 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:49.719 00:07:49.719 real 0m3.243s 00:07:49.719 user 0m4.154s 00:07:49.719 sys 0m0.495s 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:49.719 22:52:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.719 ************************************ 00:07:49.719 END TEST raid_read_error_test 00:07:49.719 ************************************ 00:07:49.979 22:52:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:49.979 22:52:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.979 22:52:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.979 22:52:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.979 ************************************ 00:07:49.979 START TEST raid_write_error_test 00:07:49.979 ************************************ 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vxddi87zvP 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75464 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75464 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75464 ']' 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.979 
22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.979 22:52:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.979 [2024-11-26 22:52:28.973982] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:49.979 [2024-11-26 22:52:28.974124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75464 ] 00:07:50.239 [2024-11-26 22:52:29.108270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.239 [2024-11-26 22:52:29.144450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.239 [2024-11-26 22:52:29.169464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.239 [2024-11-26 22:52:29.211924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.239 [2024-11-26 22:52:29.211965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.809 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 BaseBdev1_malloc 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 true 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 [2024-11-26 22:52:29.825238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:50.810 [2024-11-26 22:52:29.825320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.810 [2024-11-26 22:52:29.825356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:50.810 [2024-11-26 22:52:29.825369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.810 [2024-11-26 22:52:29.827415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.810 [2024-11-26 22:52:29.827453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:50.810 BaseBdev1 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 BaseBdev2_malloc 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 true 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 [2024-11-26 22:52:29.865953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.810 [2024-11-26 22:52:29.866012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.810 [2024-11-26 22:52:29.866043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:50.810 [2024-11-26 22:52:29.866052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.810 [2024-11-26 22:52:29.868034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.810 [2024-11-26 22:52:29.868083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.810 BaseBdev2 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 [2024-11-26 22:52:29.877992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.810 [2024-11-26 22:52:29.879742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.810 [2024-11-26 22:52:29.879902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:50.810 
[2024-11-26 22:52:29.879924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.810 [2024-11-26 22:52:29.880140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:50.810 [2024-11-26 22:52:29.880303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:50.810 [2024-11-26 22:52:29.880320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:50.810 [2024-11-26 22:52:29.880443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.810 22:52:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.070 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.070 "name": "raid_bdev1", 00:07:51.070 "uuid": "c88be215-bf41-482f-b8bd-b1d978b9f1f6", 00:07:51.070 "strip_size_kb": 64, 00:07:51.070 "state": "online", 00:07:51.070 "raid_level": "concat", 00:07:51.070 "superblock": true, 00:07:51.070 "num_base_bdevs": 2, 00:07:51.070 "num_base_bdevs_discovered": 2, 00:07:51.070 "num_base_bdevs_operational": 2, 00:07:51.070 "base_bdevs_list": [ 00:07:51.070 { 00:07:51.070 "name": "BaseBdev1", 00:07:51.070 "uuid": "90bc11b0-0388-5ff5-a83d-d30703524da7", 00:07:51.070 "is_configured": true, 00:07:51.070 "data_offset": 2048, 00:07:51.070 "data_size": 63488 00:07:51.070 }, 00:07:51.070 { 00:07:51.070 "name": "BaseBdev2", 00:07:51.070 "uuid": "9ee13aaa-252d-52b9-8b5c-ecd6f0259abc", 00:07:51.070 "is_configured": true, 00:07:51.070 "data_offset": 2048, 00:07:51.070 "data_size": 63488 00:07:51.070 } 00:07:51.070 ] 00:07:51.070 }' 00:07:51.070 22:52:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.070 22:52:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.369 22:52:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:51.369 22:52:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:51.369 [2024-11-26 22:52:30.418539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:52.322 22:52:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.323 "name": "raid_bdev1", 00:07:52.323 "uuid": "c88be215-bf41-482f-b8bd-b1d978b9f1f6", 00:07:52.323 "strip_size_kb": 64, 00:07:52.323 "state": "online", 00:07:52.323 "raid_level": "concat", 00:07:52.323 "superblock": true, 00:07:52.323 "num_base_bdevs": 2, 00:07:52.323 "num_base_bdevs_discovered": 2, 00:07:52.323 "num_base_bdevs_operational": 2, 00:07:52.323 "base_bdevs_list": [ 00:07:52.323 { 00:07:52.323 "name": "BaseBdev1", 00:07:52.323 "uuid": "90bc11b0-0388-5ff5-a83d-d30703524da7", 00:07:52.323 "is_configured": true, 00:07:52.323 "data_offset": 2048, 00:07:52.323 "data_size": 63488 00:07:52.323 }, 00:07:52.323 { 00:07:52.323 "name": "BaseBdev2", 00:07:52.323 "uuid": "9ee13aaa-252d-52b9-8b5c-ecd6f0259abc", 00:07:52.323 "is_configured": true, 00:07:52.323 "data_offset": 2048, 00:07:52.323 "data_size": 63488 00:07:52.323 } 00:07:52.323 ] 00:07:52.323 }' 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.323 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.894 [2024-11-26 22:52:31.801120] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.894 [2024-11-26 22:52:31.801172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.894 [2024-11-26 22:52:31.803699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.894 [2024-11-26 22:52:31.803848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.894 [2024-11-26 22:52:31.803888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.894 [2024-11-26 22:52:31.803899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:52.894 { 00:07:52.894 "results": [ 00:07:52.894 { 00:07:52.894 "job": "raid_bdev1", 00:07:52.894 "core_mask": "0x1", 00:07:52.894 "workload": "randrw", 00:07:52.894 "percentage": 50, 00:07:52.894 "status": "finished", 00:07:52.894 "queue_depth": 1, 00:07:52.894 "io_size": 131072, 00:07:52.894 "runtime": 1.380726, 00:07:52.894 "iops": 17083.766076687192, 00:07:52.894 "mibps": 2135.470759585899, 00:07:52.894 "io_failed": 1, 00:07:52.894 "io_timeout": 0, 00:07:52.894 "avg_latency_us": 80.65277882576983, 00:07:52.894 "min_latency_us": 24.76771550597054, 00:07:52.894 "max_latency_us": 1406.6277346814259 00:07:52.894 } 00:07:52.894 ], 00:07:52.894 "core_count": 1 00:07:52.894 } 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75464 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75464 ']' 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75464 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:52.894 22:52:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75464 00:07:52.894 killing process with pid 75464 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75464' 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75464 00:07:52.894 [2024-11-26 22:52:31.846315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.894 22:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75464 00:07:52.894 [2024-11-26 22:52:31.862338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vxddi87zvP 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:53.155 00:07:53.155 real 0m3.209s 00:07:53.155 user 0m4.090s 00:07:53.155 sys 0m0.512s 00:07:53.155 22:52:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.155 ************************************ 00:07:53.155 END TEST raid_write_error_test 00:07:53.155 ************************************ 00:07:53.155 22:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 22:52:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:53.155 22:52:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:53.155 22:52:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.155 22:52:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.155 22:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 ************************************ 00:07:53.155 START TEST raid_state_function_test 00:07:53.155 ************************************ 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:53.155 Process raid pid: 75596 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75596 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75596' 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 
75596 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75596 ']' 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.155 22:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 [2024-11-26 22:52:32.247845] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:53.155 [2024-11-26 22:52:32.247976] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.415 [2024-11-26 22:52:32.383825] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:53.415 [2024-11-26 22:52:32.421030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.415 [2024-11-26 22:52:32.449736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.415 [2024-11-26 22:52:32.491571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.415 [2024-11-26 22:52:32.491679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.984 [2024-11-26 22:52:33.075329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.984 [2024-11-26 22:52:33.075481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.984 [2024-11-26 22:52:33.075497] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.984 [2024-11-26 22:52:33.075504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.984 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.244 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.244 "name": "Existed_Raid", 00:07:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.244 "strip_size_kb": 0, 00:07:54.244 "state": "configuring", 00:07:54.244 "raid_level": "raid1", 00:07:54.244 "superblock": false, 00:07:54.244 "num_base_bdevs": 2, 00:07:54.244 "num_base_bdevs_discovered": 0, 00:07:54.244 "num_base_bdevs_operational": 2, 00:07:54.244 "base_bdevs_list": [ 00:07:54.244 { 00:07:54.244 "name": "BaseBdev1", 00:07:54.244 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:54.244 "is_configured": false, 00:07:54.244 "data_offset": 0, 00:07:54.244 "data_size": 0 00:07:54.244 }, 00:07:54.244 { 00:07:54.244 "name": "BaseBdev2", 00:07:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.244 "is_configured": false, 00:07:54.244 "data_offset": 0, 00:07:54.244 "data_size": 0 00:07:54.244 } 00:07:54.244 ] 00:07:54.244 }' 00:07:54.244 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.244 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 [2024-11-26 22:52:33.503365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.505 [2024-11-26 22:52:33.503475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 [2024-11-26 22:52:33.515395] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.505 [2024-11-26 22:52:33.515477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.505 [2024-11-26 
22:52:33.515509] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.505 [2024-11-26 22:52:33.515529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 [2024-11-26 22:52:33.536020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.505 BaseBdev1 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 22:52:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.505 [ 00:07:54.505 { 00:07:54.505 "name": "BaseBdev1", 00:07:54.505 "aliases": [ 00:07:54.505 "80400d3b-54c3-451a-a06e-ee597ad1bfce" 00:07:54.505 ], 00:07:54.505 "product_name": "Malloc disk", 00:07:54.505 "block_size": 512, 00:07:54.505 "num_blocks": 65536, 00:07:54.505 "uuid": "80400d3b-54c3-451a-a06e-ee597ad1bfce", 00:07:54.505 "assigned_rate_limits": { 00:07:54.505 "rw_ios_per_sec": 0, 00:07:54.505 "rw_mbytes_per_sec": 0, 00:07:54.505 "r_mbytes_per_sec": 0, 00:07:54.505 "w_mbytes_per_sec": 0 00:07:54.505 }, 00:07:54.505 "claimed": true, 00:07:54.505 "claim_type": "exclusive_write", 00:07:54.505 "zoned": false, 00:07:54.505 "supported_io_types": { 00:07:54.505 "read": true, 00:07:54.505 "write": true, 00:07:54.505 "unmap": true, 00:07:54.505 "flush": true, 00:07:54.505 "reset": true, 00:07:54.505 "nvme_admin": false, 00:07:54.505 "nvme_io": false, 00:07:54.505 "nvme_io_md": false, 00:07:54.505 "write_zeroes": true, 00:07:54.505 "zcopy": true, 00:07:54.505 "get_zone_info": false, 00:07:54.505 "zone_management": false, 00:07:54.505 "zone_append": false, 00:07:54.505 "compare": false, 00:07:54.505 "compare_and_write": false, 00:07:54.505 "abort": true, 00:07:54.505 "seek_hole": false, 00:07:54.505 "seek_data": false, 00:07:54.505 "copy": true, 00:07:54.505 "nvme_iov_md": false 00:07:54.505 }, 00:07:54.505 "memory_domains": [ 00:07:54.505 { 00:07:54.505 "dma_device_id": "system", 00:07:54.505 "dma_device_type": 1 00:07:54.505 }, 00:07:54.505 { 00:07:54.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.505 "dma_device_type": 
2 00:07:54.505 } 00:07:54.505 ], 00:07:54.505 "driver_specific": {} 00:07:54.505 } 00:07:54.505 ] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.505 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.506 "name": "Existed_Raid", 00:07:54.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.506 "strip_size_kb": 0, 00:07:54.506 "state": "configuring", 00:07:54.506 "raid_level": "raid1", 00:07:54.506 "superblock": false, 00:07:54.506 "num_base_bdevs": 2, 00:07:54.506 "num_base_bdevs_discovered": 1, 00:07:54.506 "num_base_bdevs_operational": 2, 00:07:54.506 "base_bdevs_list": [ 00:07:54.506 { 00:07:54.506 "name": "BaseBdev1", 00:07:54.506 "uuid": "80400d3b-54c3-451a-a06e-ee597ad1bfce", 00:07:54.506 "is_configured": true, 00:07:54.506 "data_offset": 0, 00:07:54.506 "data_size": 65536 00:07:54.506 }, 00:07:54.506 { 00:07:54.506 "name": "BaseBdev2", 00:07:54.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.506 "is_configured": false, 00:07:54.506 "data_offset": 0, 00:07:54.506 "data_size": 0 00:07:54.506 } 00:07:54.506 ] 00:07:54.506 }' 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.506 22:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.076 [2024-11-26 22:52:34.016174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.076 [2024-11-26 22:52:34.016312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.076 22:52:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.076 [2024-11-26 22:52:34.028201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.076 [2024-11-26 22:52:34.029887] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.076 [2024-11-26 22:52:34.029926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.076 "name": "Existed_Raid", 00:07:55.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.076 "strip_size_kb": 0, 00:07:55.076 "state": "configuring", 00:07:55.076 "raid_level": "raid1", 00:07:55.076 "superblock": false, 00:07:55.076 "num_base_bdevs": 2, 00:07:55.076 "num_base_bdevs_discovered": 1, 00:07:55.076 "num_base_bdevs_operational": 2, 00:07:55.076 "base_bdevs_list": [ 00:07:55.076 { 00:07:55.076 "name": "BaseBdev1", 00:07:55.076 "uuid": "80400d3b-54c3-451a-a06e-ee597ad1bfce", 00:07:55.076 "is_configured": true, 00:07:55.076 "data_offset": 0, 00:07:55.076 "data_size": 65536 00:07:55.076 }, 00:07:55.076 { 00:07:55.076 "name": "BaseBdev2", 00:07:55.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.076 "is_configured": false, 00:07:55.076 "data_offset": 0, 00:07:55.076 "data_size": 0 00:07:55.076 } 00:07:55.076 ] 00:07:55.076 }' 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.076 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.646 
22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.646 [2024-11-26 22:52:34.499369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.646 [2024-11-26 22:52:34.499504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:55.646 [2024-11-26 22:52:34.499541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:55.646 [2024-11-26 22:52:34.499827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:55.646 [2024-11-26 22:52:34.500023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:55.646 [2024-11-26 22:52:34.500071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:55.646 [2024-11-26 22:52:34.500343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.646 BaseBdev2 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.646 [ 00:07:55.646 { 00:07:55.646 "name": "BaseBdev2", 00:07:55.646 "aliases": [ 00:07:55.646 "23101b75-534f-49c7-b91a-0361cfddcb42" 00:07:55.646 ], 00:07:55.646 "product_name": "Malloc disk", 00:07:55.646 "block_size": 512, 00:07:55.646 "num_blocks": 65536, 00:07:55.646 "uuid": "23101b75-534f-49c7-b91a-0361cfddcb42", 00:07:55.646 "assigned_rate_limits": { 00:07:55.646 "rw_ios_per_sec": 0, 00:07:55.646 "rw_mbytes_per_sec": 0, 00:07:55.646 "r_mbytes_per_sec": 0, 00:07:55.646 "w_mbytes_per_sec": 0 00:07:55.646 }, 00:07:55.646 "claimed": true, 00:07:55.646 "claim_type": "exclusive_write", 00:07:55.646 "zoned": false, 00:07:55.646 "supported_io_types": { 00:07:55.646 "read": true, 00:07:55.646 "write": true, 00:07:55.646 "unmap": true, 00:07:55.646 "flush": true, 00:07:55.646 "reset": true, 00:07:55.646 "nvme_admin": false, 00:07:55.646 "nvme_io": false, 00:07:55.646 "nvme_io_md": false, 00:07:55.646 "write_zeroes": true, 00:07:55.646 "zcopy": true, 00:07:55.646 "get_zone_info": false, 00:07:55.646 "zone_management": false, 00:07:55.646 "zone_append": false, 00:07:55.646 "compare": false, 00:07:55.646 "compare_and_write": false, 
00:07:55.646 "abort": true, 00:07:55.646 "seek_hole": false, 00:07:55.646 "seek_data": false, 00:07:55.646 "copy": true, 00:07:55.646 "nvme_iov_md": false 00:07:55.646 }, 00:07:55.646 "memory_domains": [ 00:07:55.646 { 00:07:55.646 "dma_device_id": "system", 00:07:55.646 "dma_device_type": 1 00:07:55.646 }, 00:07:55.646 { 00:07:55.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.646 "dma_device_type": 2 00:07:55.646 } 00:07:55.646 ], 00:07:55.646 "driver_specific": {} 00:07:55.646 } 00:07:55.646 ] 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.646 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.647 
22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.647 "name": "Existed_Raid", 00:07:55.647 "uuid": "61ffec06-86b4-44ba-95d6-310248f2d725", 00:07:55.647 "strip_size_kb": 0, 00:07:55.647 "state": "online", 00:07:55.647 "raid_level": "raid1", 00:07:55.647 "superblock": false, 00:07:55.647 "num_base_bdevs": 2, 00:07:55.647 "num_base_bdevs_discovered": 2, 00:07:55.647 "num_base_bdevs_operational": 2, 00:07:55.647 "base_bdevs_list": [ 00:07:55.647 { 00:07:55.647 "name": "BaseBdev1", 00:07:55.647 "uuid": "80400d3b-54c3-451a-a06e-ee597ad1bfce", 00:07:55.647 "is_configured": true, 00:07:55.647 "data_offset": 0, 00:07:55.647 "data_size": 65536 00:07:55.647 }, 00:07:55.647 { 00:07:55.647 "name": "BaseBdev2", 00:07:55.647 "uuid": "23101b75-534f-49c7-b91a-0361cfddcb42", 00:07:55.647 "is_configured": true, 00:07:55.647 "data_offset": 0, 00:07:55.647 "data_size": 65536 00:07:55.647 } 00:07:55.647 ] 00:07:55.647 }' 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.647 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.906 22:52:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.906 [2024-11-26 22:52:34.963805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.906 22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.906 "name": "Existed_Raid", 00:07:55.906 "aliases": [ 00:07:55.906 "61ffec06-86b4-44ba-95d6-310248f2d725" 00:07:55.906 ], 00:07:55.906 "product_name": "Raid Volume", 00:07:55.907 "block_size": 512, 00:07:55.907 "num_blocks": 65536, 00:07:55.907 "uuid": "61ffec06-86b4-44ba-95d6-310248f2d725", 00:07:55.907 "assigned_rate_limits": { 00:07:55.907 "rw_ios_per_sec": 0, 00:07:55.907 "rw_mbytes_per_sec": 0, 00:07:55.907 "r_mbytes_per_sec": 0, 00:07:55.907 "w_mbytes_per_sec": 0 00:07:55.907 }, 00:07:55.907 "claimed": false, 00:07:55.907 "zoned": false, 00:07:55.907 "supported_io_types": { 00:07:55.907 "read": true, 00:07:55.907 "write": true, 00:07:55.907 "unmap": false, 00:07:55.907 
"flush": false, 00:07:55.907 "reset": true, 00:07:55.907 "nvme_admin": false, 00:07:55.907 "nvme_io": false, 00:07:55.907 "nvme_io_md": false, 00:07:55.907 "write_zeroes": true, 00:07:55.907 "zcopy": false, 00:07:55.907 "get_zone_info": false, 00:07:55.907 "zone_management": false, 00:07:55.907 "zone_append": false, 00:07:55.907 "compare": false, 00:07:55.907 "compare_and_write": false, 00:07:55.907 "abort": false, 00:07:55.907 "seek_hole": false, 00:07:55.907 "seek_data": false, 00:07:55.907 "copy": false, 00:07:55.907 "nvme_iov_md": false 00:07:55.907 }, 00:07:55.907 "memory_domains": [ 00:07:55.907 { 00:07:55.907 "dma_device_id": "system", 00:07:55.907 "dma_device_type": 1 00:07:55.907 }, 00:07:55.907 { 00:07:55.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.907 "dma_device_type": 2 00:07:55.907 }, 00:07:55.907 { 00:07:55.907 "dma_device_id": "system", 00:07:55.907 "dma_device_type": 1 00:07:55.907 }, 00:07:55.907 { 00:07:55.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.907 "dma_device_type": 2 00:07:55.907 } 00:07:55.907 ], 00:07:55.907 "driver_specific": { 00:07:55.907 "raid": { 00:07:55.907 "uuid": "61ffec06-86b4-44ba-95d6-310248f2d725", 00:07:55.907 "strip_size_kb": 0, 00:07:55.907 "state": "online", 00:07:55.907 "raid_level": "raid1", 00:07:55.907 "superblock": false, 00:07:55.907 "num_base_bdevs": 2, 00:07:55.907 "num_base_bdevs_discovered": 2, 00:07:55.907 "num_base_bdevs_operational": 2, 00:07:55.907 "base_bdevs_list": [ 00:07:55.907 { 00:07:55.907 "name": "BaseBdev1", 00:07:55.907 "uuid": "80400d3b-54c3-451a-a06e-ee597ad1bfce", 00:07:55.907 "is_configured": true, 00:07:55.907 "data_offset": 0, 00:07:55.907 "data_size": 65536 00:07:55.907 }, 00:07:55.907 { 00:07:55.907 "name": "BaseBdev2", 00:07:55.907 "uuid": "23101b75-534f-49c7-b91a-0361cfddcb42", 00:07:55.907 "is_configured": true, 00:07:55.907 "data_offset": 0, 00:07:55.907 "data_size": 65536 00:07:55.907 } 00:07:55.907 ] 00:07:55.907 } 00:07:55.907 } 00:07:55.907 }' 00:07:55.907 
22:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:56.167 BaseBdev2' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.167 [2024-11-26 22:52:35.171660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.167 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.168 "name": "Existed_Raid", 00:07:56.168 "uuid": "61ffec06-86b4-44ba-95d6-310248f2d725", 00:07:56.168 "strip_size_kb": 0, 00:07:56.168 "state": "online", 00:07:56.168 "raid_level": "raid1", 00:07:56.168 "superblock": false, 00:07:56.168 "num_base_bdevs": 2, 00:07:56.168 "num_base_bdevs_discovered": 1, 00:07:56.168 "num_base_bdevs_operational": 1, 00:07:56.168 "base_bdevs_list": [ 00:07:56.168 { 00:07:56.168 "name": null, 00:07:56.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.168 "is_configured": false, 00:07:56.168 "data_offset": 0, 00:07:56.168 "data_size": 65536 00:07:56.168 }, 00:07:56.168 { 00:07:56.168 "name": 
"BaseBdev2", 00:07:56.168 "uuid": "23101b75-534f-49c7-b91a-0361cfddcb42", 00:07:56.168 "is_configured": true, 00:07:56.168 "data_offset": 0, 00:07:56.168 "data_size": 65536 00:07:56.168 } 00:07:56.168 ] 00:07:56.168 }' 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.168 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.738 [2024-11-26 22:52:35.626844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.738 [2024-11-26 22:52:35.626939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:07:56.738 [2024-11-26 22:52:35.638230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.738 [2024-11-26 22:52:35.638292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.738 [2024-11-26 22:52:35.638304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75596 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75596 ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75596 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@959 -- # uname 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75596 00:07:56.738 killing process with pid 75596 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75596' 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75596 00:07:56.738 [2024-11-26 22:52:35.737022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.738 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75596 00:07:56.738 [2024-11-26 22:52:35.737979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.999 22:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.999 00:07:56.999 real 0m3.803s 00:07:56.999 user 0m5.986s 00:07:56.999 sys 0m0.784s 00:07:56.999 ************************************ 00:07:56.999 END TEST raid_state_function_test 00:07:56.999 ************************************ 00:07:56.999 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.999 22:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.999 22:52:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:56.999 22:52:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.999 22:52:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.999 
22:52:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.999 ************************************ 00:07:56.999 START TEST raid_state_function_test_sb 00:07:56.999 ************************************ 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:56.999 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75833 00:07:57.000 Process raid pid: 75833 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75833' 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75833 00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75833 ']' 00:07:57.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:57.000 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:57.260 [2024-11-26 22:52:36.135715] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:07:57.260 [2024-11-26 22:52:36.135853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:57.260 [2024-11-26 22:52:36.271459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:57.260 [2024-11-26 22:52:36.309638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.260 [2024-11-26 22:52:36.336083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.260 [2024-11-26 22:52:36.378699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:57.260 [2024-11-26 22:52:36.378728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.200 [2024-11-26 22:52:36.966711] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:58.200 [2024-11-26 22:52:36.966772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:58.200 [2024-11-26 22:52:36.966787] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:58.200 [2024-11-26 22:52:36.966795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.200 22:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.200 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:58.200 "name": "Existed_Raid",
00:07:58.200 "uuid": "50854dc5-a063-46fa-b916-e3bd40ccd103",
00:07:58.200 "strip_size_kb": 0,
00:07:58.200 "state": "configuring",
00:07:58.200 "raid_level": "raid1",
00:07:58.200 "superblock": true,
00:07:58.200 "num_base_bdevs": 2,
00:07:58.200 "num_base_bdevs_discovered": 0,
00:07:58.200 "num_base_bdevs_operational": 2,
00:07:58.200 "base_bdevs_list": [
00:07:58.200 {
00:07:58.200 "name": "BaseBdev1",
00:07:58.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:58.200 "is_configured": false,
00:07:58.200 "data_offset": 0,
00:07:58.200 "data_size": 0
00:07:58.200 },
00:07:58.200 {
00:07:58.200 "name": "BaseBdev2",
00:07:58.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:58.200 "is_configured": false,
00:07:58.200 "data_offset": 0,
00:07:58.200 "data_size": 0
00:07:58.200 }
00:07:58.200 ]
00:07:58.200 }'
00:07:58.200 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:58.200 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.460 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:58.460 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.460 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.460 [2024-11-26 22:52:37.394716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:58.460 [2024-11-26 22:52:37.394825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.461 [2024-11-26 22:52:37.406745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:58.461 [2024-11-26 22:52:37.406828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:58.461 [2024-11-26 22:52:37.406857] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:58.461 [2024-11-26 22:52:37.406877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.461 [2024-11-26 22:52:37.427712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:58.461 BaseBdev1 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.461 [
00:07:58.461 {
00:07:58.461 "name": "BaseBdev1",
00:07:58.461 "aliases": [
00:07:58.461 "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1"
00:07:58.461 ],
00:07:58.461 "product_name": "Malloc disk",
00:07:58.461 "block_size": 512,
00:07:58.461 "num_blocks": 65536,
00:07:58.461 "uuid": "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1",
00:07:58.461 "assigned_rate_limits": {
00:07:58.461 "rw_ios_per_sec": 0,
00:07:58.461 "rw_mbytes_per_sec": 0,
00:07:58.461 "r_mbytes_per_sec": 0,
00:07:58.461 "w_mbytes_per_sec": 0
00:07:58.461 },
00:07:58.461 "claimed": true,
00:07:58.461 "claim_type": "exclusive_write",
00:07:58.461 "zoned": false,
00:07:58.461 "supported_io_types": {
00:07:58.461 "read": true,
00:07:58.461 "write": true,
00:07:58.461 "unmap": true,
00:07:58.461 "flush": true,
00:07:58.461 "reset": true,
00:07:58.461 "nvme_admin": false,
00:07:58.461 "nvme_io": false,
00:07:58.461 "nvme_io_md": false,
00:07:58.461 "write_zeroes": true,
00:07:58.461 "zcopy": true,
00:07:58.461 "get_zone_info": false,
00:07:58.461 "zone_management": false,
00:07:58.461 "zone_append": false,
00:07:58.461 "compare": false,
00:07:58.461 "compare_and_write": false,
00:07:58.461 "abort": true,
00:07:58.461 "seek_hole": false,
00:07:58.461 "seek_data": false,
00:07:58.461 "copy": true,
00:07:58.461 "nvme_iov_md": false
00:07:58.461 },
00:07:58.461 "memory_domains": [
00:07:58.461 {
00:07:58.461 "dma_device_id": "system",
00:07:58.461 "dma_device_type": 1
00:07:58.461 },
00:07:58.461 {
00:07:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:58.461 "dma_device_type": 2
00:07:58.461 }
00:07:58.461 ],
00:07:58.461 "driver_specific": {}
00:07:58.461 }
00:07:58.461 ]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:58.461 "name": "Existed_Raid",
00:07:58.461 "uuid": "f1853acc-af9b-4ec1-b5f5-3b9cf4eae8fa",
00:07:58.461 "strip_size_kb": 0,
00:07:58.461 "state": "configuring",
00:07:58.461 "raid_level": "raid1",
00:07:58.461 "superblock": true,
00:07:58.461 "num_base_bdevs": 2,
00:07:58.461 "num_base_bdevs_discovered": 1,
00:07:58.461 "num_base_bdevs_operational": 2,
00:07:58.461 "base_bdevs_list": [
00:07:58.461 {
00:07:58.461 "name": "BaseBdev1",
00:07:58.461 "uuid": "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1",
00:07:58.461 "is_configured": true,
00:07:58.461 "data_offset": 2048,
00:07:58.461 "data_size": 63488
00:07:58.461 },
00:07:58.461 {
00:07:58.461 "name": "BaseBdev2",
00:07:58.461 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:58.461 "is_configured": false,
00:07:58.461 "data_offset": 0,
00:07:58.461 "data_size": 0
00:07:58.461 }
00:07:58.461 ]
00:07:58.461 }'
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:58.461 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.031 [2024-11-26 22:52:37.919900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:59.031 [2024-11-26 22:52:37.920048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.031 [2024-11-26 22:52:37.931926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:59.031 [2024-11-26 22:52:37.933810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:59.031 [2024-11-26 22:52:37.933883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:59.031 "name": "Existed_Raid",
00:07:59.031 "uuid": "19ce521c-7e35-4957-a1e5-c1b61917e14f",
00:07:59.031 "strip_size_kb": 0,
00:07:59.031 "state": "configuring",
00:07:59.031 "raid_level": "raid1",
00:07:59.031 "superblock": true,
00:07:59.031 "num_base_bdevs": 2,
00:07:59.031 "num_base_bdevs_discovered": 1,
00:07:59.031 "num_base_bdevs_operational": 2,
00:07:59.031 "base_bdevs_list": [
00:07:59.031 {
00:07:59.031 "name": "BaseBdev1",
00:07:59.031 "uuid": "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1",
00:07:59.031 "is_configured": true,
00:07:59.031 "data_offset": 2048,
00:07:59.031 "data_size": 63488
00:07:59.031 },
00:07:59.031 {
00:07:59.031 "name": "BaseBdev2",
00:07:59.031 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:59.031 "is_configured": false,
00:07:59.031 "data_offset": 0,
00:07:59.031 "data_size": 0
00:07:59.031 }
00:07:59.031 ]
00:07:59.031 }'
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:59.031 22:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.290 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:59.290 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.290 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.291 [2024-11-26 22:52:38.334954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:59.291 [2024-11-26 22:52:38.335144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:07:59.291 [2024-11-26 22:52:38.335161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:59.291 [2024-11-26 22:52:38.335417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:07:59.291 BaseBdev2 [2024-11-26 22:52:38.335561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:07:59.291 [2024-11-26 22:52:38.335577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00
00:07:59.291 [2024-11-26 22:52:38.335705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.291 [
00:07:59.291 {
00:07:59.291 "name": "BaseBdev2",
00:07:59.291 "aliases": [
00:07:59.291 "36b51ee0-8f4a-440b-a071-586095c01148"
00:07:59.291 ],
00:07:59.291 "product_name": "Malloc disk",
00:07:59.291 "block_size": 512,
00:07:59.291 "num_blocks": 65536,
00:07:59.291 "uuid": "36b51ee0-8f4a-440b-a071-586095c01148",
00:07:59.291 "assigned_rate_limits": {
00:07:59.291 "rw_ios_per_sec": 0,
00:07:59.291 "rw_mbytes_per_sec": 0,
00:07:59.291 "r_mbytes_per_sec": 0,
00:07:59.291 "w_mbytes_per_sec": 0
00:07:59.291 },
00:07:59.291 "claimed": true,
00:07:59.291 "claim_type": "exclusive_write",
00:07:59.291 "zoned": false,
00:07:59.291 "supported_io_types": {
00:07:59.291 "read": true,
00:07:59.291 "write": true,
00:07:59.291 "unmap": true,
00:07:59.291 "flush": true,
00:07:59.291 "reset": true,
00:07:59.291 "nvme_admin": false,
00:07:59.291 "nvme_io": false,
00:07:59.291 "nvme_io_md": false,
00:07:59.291 "write_zeroes": true,
00:07:59.291 "zcopy": true,
00:07:59.291 "get_zone_info": false,
00:07:59.291 "zone_management": false,
00:07:59.291 "zone_append": false,
00:07:59.291 "compare": false,
00:07:59.291 "compare_and_write": false,
00:07:59.291 "abort": true,
00:07:59.291 "seek_hole": false,
00:07:59.291 "seek_data": false,
00:07:59.291 "copy": true,
00:07:59.291 "nvme_iov_md": false
00:07:59.291 },
00:07:59.291 "memory_domains": [
00:07:59.291 {
00:07:59.291 "dma_device_id": "system",
00:07:59.291 "dma_device_type": 1
00:07:59.291 },
00:07:59.291 {
00:07:59.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:59.291 "dma_device_type": 2
00:07:59.291 }
00:07:59.291 ],
00:07:59.291 "driver_specific": {}
00:07:59.291 }
00:07:59.291 ]
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.291 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.550 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:59.550 "name": "Existed_Raid",
00:07:59.550 "uuid": "19ce521c-7e35-4957-a1e5-c1b61917e14f",
00:07:59.550 "strip_size_kb": 0,
00:07:59.550 "state": "online",
00:07:59.550 "raid_level": "raid1",
00:07:59.550 "superblock": true,
00:07:59.550 "num_base_bdevs": 2,
00:07:59.550 "num_base_bdevs_discovered": 2,
00:07:59.550 "num_base_bdevs_operational": 2,
00:07:59.550 "base_bdevs_list": [
00:07:59.550 {
00:07:59.550 "name": "BaseBdev1",
00:07:59.550 "uuid": "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1",
00:07:59.550 "is_configured": true,
00:07:59.550 "data_offset": 2048,
00:07:59.550 "data_size": 63488
00:07:59.550 },
00:07:59.550 {
00:07:59.550 "name": "BaseBdev2",
00:07:59.550 "uuid": "36b51ee0-8f4a-440b-a071-586095c01148",
00:07:59.550 "is_configured": true,
00:07:59.550 "data_offset": 2048,
00:07:59.550 "data_size": 63488
00:07:59.550 }
00:07:59.550 ]
00:07:59.550 }'
00:07:59.550 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:59.550 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:59.811 [2024-11-26 22:52:38.791404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:59.811 "name": "Existed_Raid",
00:07:59.811 "aliases": [
00:07:59.811 "19ce521c-7e35-4957-a1e5-c1b61917e14f"
00:07:59.811 ],
00:07:59.811 "product_name": "Raid Volume",
00:07:59.811 "block_size": 512,
00:07:59.811 "num_blocks": 63488,
00:07:59.811 "uuid": "19ce521c-7e35-4957-a1e5-c1b61917e14f",
00:07:59.811 "assigned_rate_limits": {
00:07:59.811 "rw_ios_per_sec": 0,
00:07:59.811 "rw_mbytes_per_sec": 0,
00:07:59.811 "r_mbytes_per_sec": 0,
00:07:59.811 "w_mbytes_per_sec": 0
00:07:59.811 },
00:07:59.811 "claimed": false,
00:07:59.811 "zoned": false,
00:07:59.811 "supported_io_types": {
00:07:59.811 "read": true,
00:07:59.811 "write": true,
00:07:59.811 "unmap": false,
00:07:59.811 "flush": false,
00:07:59.811 "reset": true,
00:07:59.811 "nvme_admin": false,
00:07:59.811 "nvme_io": false,
00:07:59.811 "nvme_io_md": false,
00:07:59.811 "write_zeroes": true,
00:07:59.811 "zcopy": false,
00:07:59.811 "get_zone_info": false,
00:07:59.811 "zone_management": false,
00:07:59.811 "zone_append": false,
00:07:59.811 "compare": false,
00:07:59.811 "compare_and_write": false,
00:07:59.811 "abort": false,
00:07:59.811 "seek_hole": false,
00:07:59.811 "seek_data": false,
00:07:59.811 "copy": false,
00:07:59.811 "nvme_iov_md": false
00:07:59.811 },
00:07:59.811 "memory_domains": [
00:07:59.811 {
00:07:59.811 "dma_device_id": "system",
00:07:59.811 "dma_device_type": 1
00:07:59.811 },
00:07:59.811 {
00:07:59.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:59.811 "dma_device_type": 2
00:07:59.811 },
00:07:59.811 {
00:07:59.811 "dma_device_id": "system",
00:07:59.811 "dma_device_type": 1
00:07:59.811 },
00:07:59.811 {
00:07:59.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:59.811 "dma_device_type": 2
00:07:59.811 }
00:07:59.811 ],
00:07:59.811 "driver_specific": {
00:07:59.811 "raid": {
00:07:59.811 "uuid": "19ce521c-7e35-4957-a1e5-c1b61917e14f",
00:07:59.811 "strip_size_kb": 0,
00:07:59.811 "state": "online",
00:07:59.811 "raid_level": "raid1",
00:07:59.811 "superblock": true,
00:07:59.811 "num_base_bdevs": 2,
00:07:59.811 "num_base_bdevs_discovered": 2,
00:07:59.811 "num_base_bdevs_operational": 2,
00:07:59.811 "base_bdevs_list": [
00:07:59.811 {
00:07:59.811 "name": "BaseBdev1",
00:07:59.811 "uuid": "0a01d3ea-9d09-4bf5-8c0d-5cd80b0eb3b1",
00:07:59.811 "is_configured": true,
00:07:59.811 "data_offset": 2048,
00:07:59.811 "data_size": 63488
00:07:59.811 },
00:07:59.811 {
00:07:59.811 "name": "BaseBdev2",
00:07:59.811 "uuid": "36b51ee0-8f4a-440b-a071-586095c01148",
00:07:59.811 "is_configured": true,
00:07:59.811 "data_offset": 2048,
00:07:59.811 "data_size": 63488
00:07:59.811 }
00:07:59.811 ]
00:07:59.811 }
00:07:59.811 }
00:07:59.811 }'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:59.811 BaseBdev2'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.811 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.072 22:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.072 [2024-11-26 22:52:38.991226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:00.072 "name": "Existed_Raid",
00:08:00.072 "uuid": "19ce521c-7e35-4957-a1e5-c1b61917e14f",
00:08:00.072 "strip_size_kb": 0,
00:08:00.072 "state": "online",
00:08:00.072 "raid_level": "raid1",
00:08:00.072 "superblock": true,
00:08:00.072 "num_base_bdevs": 2,
00:08:00.072 "num_base_bdevs_discovered": 1,
00:08:00.072 "num_base_bdevs_operational": 1,
00:08:00.072 "base_bdevs_list": [
00:08:00.072 {
00:08:00.072 "name": null,
00:08:00.072 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:00.072 "is_configured": false,
00:08:00.072 "data_offset": 0,
00:08:00.072 "data_size": 63488
00:08:00.072 },
00:08:00.072 {
00:08:00.072 "name": "BaseBdev2",
00:08:00.072 "uuid": "36b51ee0-8f4a-440b-a071-586095c01148",
00:08:00.072 "is_configured": true,
00:08:00.072 "data_offset": 2048,
00:08:00.072 "data_size": 63488
00:08:00.072 }
00:08:00.072 ]
00:08:00.072 }'
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:00.072 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.641 [2024-11-26 22:52:39.538589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:00.641 [2024-11-26 22:52:39.538712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:00.641 [2024-11-26 22:52:39.550480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:00.641 [2024-11-26 22:52:39.550579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:00.641 [2024-11-26 22:52:39.550628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.641 22:52:39
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75833 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75833 ']' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75833 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75833 00:08:00.641 killing process with pid 75833 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75833' 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75833 00:08:00.641 [2024-11-26 22:52:39.649807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.641 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75833 00:08:00.641 [2024-11-26 22:52:39.650788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.902 ************************************ 00:08:00.902 END TEST raid_state_function_test_sb 00:08:00.902 ************************************ 00:08:00.902 
22:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.902 00:08:00.902 real 0m3.836s 00:08:00.902 user 0m6.010s 00:08:00.902 sys 0m0.816s 00:08:00.902 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.902 22:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 22:52:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:00.902 22:52:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:00.902 22:52:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.902 22:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 ************************************ 00:08:00.902 START TEST raid_superblock_test 00:08:00.902 ************************************ 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:00.902 
22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76074 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76074 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76074 ']' 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.902 22:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.162 [2024-11-26 22:52:40.040190] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:08:01.163 [2024-11-26 22:52:40.040449] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:08:01.163 [2024-11-26 22:52:40.178580] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.163 [2024-11-26 22:52:40.202264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.163 [2024-11-26 22:52:40.229100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.163 [2024-11-26 22:52:40.270297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.163 [2024-11-26 22:52:40.270400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.733 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.733 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.733 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.733 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.734 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.994 malloc1 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.994 [2024-11-26 22:52:40.870492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.994 [2024-11-26 22:52:40.870572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.994 [2024-11-26 22:52:40.870602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.994 [2024-11-26 22:52:40.870611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.994 [2024-11-26 22:52:40.872627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.994 [2024-11-26 22:52:40.872742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.994 pt1 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.994 malloc2 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.994 [2024-11-26 22:52:40.898972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.994 [2024-11-26 22:52:40.899086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.994 [2024-11-26 22:52:40.899123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:01.994 [2024-11-26 22:52:40.899150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.994 [2024-11-26 22:52:40.901111] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.994 [2024-11-26 22:52:40.901177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.994 pt2 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.994 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.994 [2024-11-26 22:52:40.910991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.994 [2024-11-26 22:52:40.912709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.994 [2024-11-26 22:52:40.912882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:01.994 [2024-11-26 22:52:40.912925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.995 [2024-11-26 22:52:40.913191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:01.995 [2024-11-26 22:52:40.913370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:01.995 [2024-11-26 22:52:40.913415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:01.995 [2024-11-26 22:52:40.913568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.995 "name": "raid_bdev1", 00:08:01.995 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:01.995 "strip_size_kb": 0, 00:08:01.995 "state": "online", 00:08:01.995 "raid_level": "raid1", 00:08:01.995 "superblock": true, 00:08:01.995 
"num_base_bdevs": 2, 00:08:01.995 "num_base_bdevs_discovered": 2, 00:08:01.995 "num_base_bdevs_operational": 2, 00:08:01.995 "base_bdevs_list": [ 00:08:01.995 { 00:08:01.995 "name": "pt1", 00:08:01.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.995 "is_configured": true, 00:08:01.995 "data_offset": 2048, 00:08:01.995 "data_size": 63488 00:08:01.995 }, 00:08:01.995 { 00:08:01.995 "name": "pt2", 00:08:01.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.995 "is_configured": true, 00:08:01.995 "data_offset": 2048, 00:08:01.995 "data_size": 63488 00:08:01.995 } 00:08:01.995 ] 00:08:01.995 }' 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.995 22:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.255 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 [2024-11-26 22:52:41.391426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.515 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.515 "name": "raid_bdev1", 00:08:02.515 "aliases": [ 00:08:02.515 "a30df291-7edb-41d5-8e05-6f57ddf0c30a" 00:08:02.515 ], 00:08:02.515 "product_name": "Raid Volume", 00:08:02.515 "block_size": 512, 00:08:02.515 "num_blocks": 63488, 00:08:02.515 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:02.515 "assigned_rate_limits": { 00:08:02.515 "rw_ios_per_sec": 0, 00:08:02.515 "rw_mbytes_per_sec": 0, 00:08:02.515 "r_mbytes_per_sec": 0, 00:08:02.515 "w_mbytes_per_sec": 0 00:08:02.515 }, 00:08:02.515 "claimed": false, 00:08:02.515 "zoned": false, 00:08:02.515 "supported_io_types": { 00:08:02.515 "read": true, 00:08:02.515 "write": true, 00:08:02.515 "unmap": false, 00:08:02.515 "flush": false, 00:08:02.515 "reset": true, 00:08:02.515 "nvme_admin": false, 00:08:02.515 "nvme_io": false, 00:08:02.515 "nvme_io_md": false, 00:08:02.515 "write_zeroes": true, 00:08:02.515 "zcopy": false, 00:08:02.515 "get_zone_info": false, 00:08:02.515 "zone_management": false, 00:08:02.515 "zone_append": false, 00:08:02.515 "compare": false, 00:08:02.515 "compare_and_write": false, 00:08:02.515 "abort": false, 00:08:02.515 "seek_hole": false, 00:08:02.515 "seek_data": false, 00:08:02.515 "copy": false, 00:08:02.515 "nvme_iov_md": false 00:08:02.515 }, 00:08:02.515 "memory_domains": [ 00:08:02.515 { 00:08:02.515 "dma_device_id": "system", 00:08:02.515 "dma_device_type": 1 00:08:02.515 }, 00:08:02.515 { 00:08:02.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.515 "dma_device_type": 2 00:08:02.515 }, 00:08:02.515 { 00:08:02.515 "dma_device_id": "system", 00:08:02.515 "dma_device_type": 1 00:08:02.515 }, 00:08:02.515 { 00:08:02.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.515 "dma_device_type": 2 00:08:02.515 } 00:08:02.515 ], 00:08:02.515 
"driver_specific": { 00:08:02.516 "raid": { 00:08:02.516 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:02.516 "strip_size_kb": 0, 00:08:02.516 "state": "online", 00:08:02.516 "raid_level": "raid1", 00:08:02.516 "superblock": true, 00:08:02.516 "num_base_bdevs": 2, 00:08:02.516 "num_base_bdevs_discovered": 2, 00:08:02.516 "num_base_bdevs_operational": 2, 00:08:02.516 "base_bdevs_list": [ 00:08:02.516 { 00:08:02.516 "name": "pt1", 00:08:02.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.516 "is_configured": true, 00:08:02.516 "data_offset": 2048, 00:08:02.516 "data_size": 63488 00:08:02.516 }, 00:08:02.516 { 00:08:02.516 "name": "pt2", 00:08:02.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.516 "is_configured": true, 00:08:02.516 "data_offset": 2048, 00:08:02.516 "data_size": 63488 00:08:02.516 } 00:08:02.516 ] 00:08:02.516 } 00:08:02.516 } 00:08:02.516 }' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.516 pt2' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.516 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.516 [2024-11-26 22:52:41.627392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=a30df291-7edb-41d5-8e05-6f57ddf0c30a 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a30df291-7edb-41d5-8e05-6f57ddf0c30a ']' 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 [2024-11-26 22:52:41.655156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.783 [2024-11-26 22:52:41.655183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.783 [2024-11-26 22:52:41.655279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.783 [2024-11-26 22:52:41.655342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.783 [2024-11-26 22:52:41.655353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 
00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 
00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.783 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.784 [2024-11-26 22:52:41.779228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.784 [2024-11-26 22:52:41.781017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.784 [2024-11-26 22:52:41.781081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:02.784 [2024-11-26 22:52:41.781131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.784 [2024-11-26 22:52:41.781145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.784 [2024-11-26 22:52:41.781156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:02.784 request: 00:08:02.784 { 00:08:02.784 "name": "raid_bdev1", 00:08:02.784 "raid_level": "raid1", 00:08:02.784 "base_bdevs": [ 
00:08:02.784 "malloc1", 00:08:02.784 "malloc2" 00:08:02.784 ], 00:08:02.784 "superblock": false, 00:08:02.784 "method": "bdev_raid_create", 00:08:02.784 "req_id": 1 00:08:02.784 } 00:08:02.784 Got JSON-RPC error response 00:08:02.784 response: 00:08:02.784 { 00:08:02.784 "code": -17, 00:08:02.784 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.784 } 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.784 [2024-11-26 
22:52:41.843224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.784 [2024-11-26 22:52:41.843346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.784 [2024-11-26 22:52:41.843383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.784 [2024-11-26 22:52:41.843421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.784 [2024-11-26 22:52:41.845551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.784 [2024-11-26 22:52:41.845622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.784 [2024-11-26 22:52:41.845716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.784 [2024-11-26 22:52:41.845781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.784 pt1 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.784 22:52:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.784 "name": "raid_bdev1", 00:08:02.784 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:02.784 "strip_size_kb": 0, 00:08:02.784 "state": "configuring", 00:08:02.784 "raid_level": "raid1", 00:08:02.784 "superblock": true, 00:08:02.784 "num_base_bdevs": 2, 00:08:02.784 "num_base_bdevs_discovered": 1, 00:08:02.784 "num_base_bdevs_operational": 2, 00:08:02.784 "base_bdevs_list": [ 00:08:02.784 { 00:08:02.784 "name": "pt1", 00:08:02.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.784 "is_configured": true, 00:08:02.784 "data_offset": 2048, 00:08:02.784 "data_size": 63488 00:08:02.784 }, 00:08:02.784 { 00:08:02.784 "name": null, 00:08:02.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.784 "is_configured": false, 00:08:02.784 "data_offset": 2048, 00:08:02.784 "data_size": 63488 00:08:02.784 } 00:08:02.784 ] 00:08:02.784 }' 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.784 22:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 
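The `verify_raid_bdev_state` helper traced above fetches `bdev_raid_get_bdevs all` and narrows it with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking the state fields. A minimal Python sketch of that same selection-and-check logic, run against a hand-written sample of the JSON shown in the trace (the sample values are copied from the log, not fetched from a live SPDK target):

```python
import json

# Sample bdev_raid_get_bdevs output, reduced to the fields the test checks
# (values taken from the raid_bdev_info dump in the trace above).
sample_output = json.loads("""
[{"name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2}]
""")

# Equivalent of jq's '.[] | select(.name == "raid_bdev1")'
info = next(b for b in sample_output if b["name"] == "raid_bdev1")

# The checks verify_raid_bdev_state performs for "configuring raid1 0 2":
assert info["state"] == "configuring"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0
assert info["num_base_bdevs_operational"] == 2
```

In the actual test script the same checks are done in bash against live RPC output; this sketch only illustrates the selection and the assertions.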
00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 [2024-11-26 22:52:42.295393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.394 [2024-11-26 22:52:42.295478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.394 [2024-11-26 22:52:42.295502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.394 [2024-11-26 22:52:42.295513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.394 [2024-11-26 22:52:42.295927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.394 [2024-11-26 22:52:42.295946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.394 [2024-11-26 22:52:42.296026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.394 [2024-11-26 22:52:42.296051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.394 [2024-11-26 22:52:42.296140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:03.394 [2024-11-26 22:52:42.296150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.394 [2024-11-26 22:52:42.296404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:03.394 [2024-11-26 22:52:42.296525] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:03.394 [2024-11-26 22:52:42.296533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:03.394 [2024-11-26 22:52:42.296638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.394 pt2 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.394 "name": "raid_bdev1", 00:08:03.394 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:03.394 "strip_size_kb": 0, 00:08:03.394 "state": "online", 00:08:03.394 "raid_level": "raid1", 00:08:03.394 "superblock": true, 00:08:03.394 "num_base_bdevs": 2, 00:08:03.394 "num_base_bdevs_discovered": 2, 00:08:03.394 "num_base_bdevs_operational": 2, 00:08:03.394 "base_bdevs_list": [ 00:08:03.394 { 00:08:03.394 "name": "pt1", 00:08:03.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.394 "is_configured": true, 00:08:03.394 "data_offset": 2048, 00:08:03.394 "data_size": 63488 00:08:03.394 }, 00:08:03.394 { 00:08:03.394 "name": "pt2", 00:08:03.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.394 "is_configured": true, 00:08:03.394 "data_offset": 2048, 00:08:03.394 "data_size": 63488 00:08:03.394 } 00:08:03.394 ] 00:08:03.394 }' 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.394 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 [2024-11-26 22:52:42.751782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.654 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.654 "name": "raid_bdev1", 00:08:03.654 "aliases": [ 00:08:03.654 "a30df291-7edb-41d5-8e05-6f57ddf0c30a" 00:08:03.654 ], 00:08:03.654 "product_name": "Raid Volume", 00:08:03.654 "block_size": 512, 00:08:03.654 "num_blocks": 63488, 00:08:03.654 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:03.654 "assigned_rate_limits": { 00:08:03.654 "rw_ios_per_sec": 0, 00:08:03.654 "rw_mbytes_per_sec": 0, 00:08:03.654 "r_mbytes_per_sec": 0, 00:08:03.654 "w_mbytes_per_sec": 0 00:08:03.654 }, 00:08:03.654 "claimed": false, 00:08:03.654 "zoned": false, 00:08:03.654 "supported_io_types": { 00:08:03.654 "read": true, 00:08:03.654 "write": true, 00:08:03.654 "unmap": false, 00:08:03.654 "flush": false, 00:08:03.654 "reset": true, 00:08:03.654 "nvme_admin": false, 00:08:03.654 "nvme_io": false, 00:08:03.654 "nvme_io_md": false, 00:08:03.654 "write_zeroes": true, 00:08:03.654 "zcopy": false, 00:08:03.654 "get_zone_info": false, 00:08:03.654 "zone_management": false, 00:08:03.654 "zone_append": false, 00:08:03.654 "compare": false, 00:08:03.654 "compare_and_write": false, 00:08:03.654 "abort": 
false, 00:08:03.654 "seek_hole": false, 00:08:03.654 "seek_data": false, 00:08:03.654 "copy": false, 00:08:03.654 "nvme_iov_md": false 00:08:03.654 }, 00:08:03.654 "memory_domains": [ 00:08:03.654 { 00:08:03.654 "dma_device_id": "system", 00:08:03.654 "dma_device_type": 1 00:08:03.654 }, 00:08:03.654 { 00:08:03.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.654 "dma_device_type": 2 00:08:03.654 }, 00:08:03.654 { 00:08:03.654 "dma_device_id": "system", 00:08:03.654 "dma_device_type": 1 00:08:03.654 }, 00:08:03.654 { 00:08:03.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.654 "dma_device_type": 2 00:08:03.654 } 00:08:03.655 ], 00:08:03.655 "driver_specific": { 00:08:03.655 "raid": { 00:08:03.655 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:03.655 "strip_size_kb": 0, 00:08:03.655 "state": "online", 00:08:03.655 "raid_level": "raid1", 00:08:03.655 "superblock": true, 00:08:03.655 "num_base_bdevs": 2, 00:08:03.655 "num_base_bdevs_discovered": 2, 00:08:03.655 "num_base_bdevs_operational": 2, 00:08:03.655 "base_bdevs_list": [ 00:08:03.655 { 00:08:03.655 "name": "pt1", 00:08:03.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.655 "is_configured": true, 00:08:03.655 "data_offset": 2048, 00:08:03.655 "data_size": 63488 00:08:03.655 }, 00:08:03.655 { 00:08:03.655 "name": "pt2", 00:08:03.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.655 "is_configured": true, 00:08:03.655 "data_offset": 2048, 00:08:03.655 "data_size": 63488 00:08:03.655 } 00:08:03.655 ] 00:08:03.655 } 00:08:03.655 } 00:08:03.655 }' 00:08:03.655 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.914 pt2' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.914 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:03.915 [2024-11-26 22:52:42.955771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a30df291-7edb-41d5-8e05-6f57ddf0c30a '!=' a30df291-7edb-41d5-8e05-6f57ddf0c30a ']' 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:03.915 22:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.915 [2024-11-26 22:52:43.007570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.915 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.175 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.175 "name": "raid_bdev1", 00:08:04.175 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:04.175 "strip_size_kb": 0, 00:08:04.175 "state": "online", 00:08:04.175 "raid_level": "raid1", 00:08:04.175 "superblock": true, 00:08:04.175 "num_base_bdevs": 2, 00:08:04.175 "num_base_bdevs_discovered": 1, 00:08:04.175 "num_base_bdevs_operational": 1, 00:08:04.175 "base_bdevs_list": [ 00:08:04.175 { 00:08:04.175 "name": null, 00:08:04.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.175 "is_configured": false, 00:08:04.175 "data_offset": 0, 00:08:04.175 "data_size": 63488 00:08:04.175 }, 00:08:04.175 { 00:08:04.175 "name": "pt2", 00:08:04.175 
"uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.175 "is_configured": true, 00:08:04.175 "data_offset": 2048, 00:08:04.175 "data_size": 63488 00:08:04.175 } 00:08:04.175 ] 00:08:04.175 }' 00:08:04.175 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.175 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 [2024-11-26 22:52:43.435703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.435 [2024-11-26 22:52:43.435814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.435 [2024-11-26 22:52:43.435916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.435 [2024-11-26 22:52:43.435980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.435 [2024-11-26 22:52:43.436057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.436 [2024-11-26 22:52:43.511685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.436 [2024-11-26 22:52:43.511741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.436 [2024-11-26 22:52:43.511775] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:04.436 [2024-11-26 22:52:43.511785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.436 [2024-11-26 22:52:43.513914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.436 [2024-11-26 22:52:43.513991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.436 [2024-11-26 22:52:43.514068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.436 [2024-11-26 22:52:43.514104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.436 [2024-11-26 22:52:43.514186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.436 [2024-11-26 22:52:43.514196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.436 [2024-11-26 22:52:43.514424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:04.436 [2024-11-26 22:52:43.514541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.436 [2024-11-26 22:52:43.514551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.436 [2024-11-26 22:52:43.514657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.436 pt2 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.436 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.696 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.696 "name": "raid_bdev1", 00:08:04.696 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:04.696 "strip_size_kb": 0, 00:08:04.696 "state": "online", 00:08:04.696 "raid_level": "raid1", 00:08:04.696 "superblock": true, 00:08:04.696 "num_base_bdevs": 2, 00:08:04.696 "num_base_bdevs_discovered": 1, 00:08:04.696 "num_base_bdevs_operational": 1, 00:08:04.696 "base_bdevs_list": [ 00:08:04.696 { 00:08:04.696 "name": null, 00:08:04.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.696 "is_configured": false, 00:08:04.696 "data_offset": 2048, 00:08:04.696 "data_size": 63488 00:08:04.696 }, 00:08:04.696 { 00:08:04.696 "name": "pt2", 
00:08:04.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.696 "is_configured": true, 00:08:04.696 "data_offset": 2048, 00:08:04.696 "data_size": 63488 00:08:04.696 } 00:08:04.696 ] 00:08:04.696 }' 00:08:04.696 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.696 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 [2024-11-26 22:52:43.979844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.957 [2024-11-26 22:52:43.979927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.957 [2024-11-26 22:52:43.980014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.957 [2024-11-26 22:52:43.980079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.957 [2024-11-26 22:52:43.980123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:04.957 22:52:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 [2024-11-26 22:52:44.039834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.957 [2024-11-26 22:52:44.039934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.957 [2024-11-26 22:52:44.039973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:04.957 [2024-11-26 22:52:44.040000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.957 [2024-11-26 22:52:44.042107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.957 [2024-11-26 22:52:44.042203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.957 [2024-11-26 22:52:44.042310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:04.957 [2024-11-26 22:52:44.042360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:04.957 [2024-11-26 22:52:44.042512] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:04.957 [2024-11-26 22:52:44.042568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.957 [2024-11-26 
22:52:44.042615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:08:04.957 [2024-11-26 22:52:44.042688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.957 [2024-11-26 22:52:44.042801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:04.957 [2024-11-26 22:52:44.042842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.957 [2024-11-26 22:52:44.043084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.957 [2024-11-26 22:52:44.043241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:04.957 [2024-11-26 22:52:44.043305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:04.957 [2024-11-26 22:52:44.043453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.957 pt1 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.957 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.217 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.217 "name": "raid_bdev1", 00:08:05.217 "uuid": "a30df291-7edb-41d5-8e05-6f57ddf0c30a", 00:08:05.217 "strip_size_kb": 0, 00:08:05.217 "state": "online", 00:08:05.217 "raid_level": "raid1", 00:08:05.217 "superblock": true, 00:08:05.218 "num_base_bdevs": 2, 00:08:05.218 "num_base_bdevs_discovered": 1, 00:08:05.218 "num_base_bdevs_operational": 1, 00:08:05.218 "base_bdevs_list": [ 00:08:05.218 { 00:08:05.218 "name": null, 00:08:05.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.218 "is_configured": false, 00:08:05.218 "data_offset": 2048, 00:08:05.218 "data_size": 63488 00:08:05.218 }, 00:08:05.218 { 00:08:05.218 "name": "pt2", 00:08:05.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.218 "is_configured": true, 00:08:05.218 "data_offset": 2048, 00:08:05.218 "data_size": 63488 00:08:05.218 } 00:08:05.218 ] 00:08:05.218 }' 00:08:05.218 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.218 22:52:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.477 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.478 [2024-11-26 22:52:44.532193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a30df291-7edb-41d5-8e05-6f57ddf0c30a '!=' a30df291-7edb-41d5-8e05-6f57ddf0c30a ']' 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76074 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76074 ']' 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76074 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:05.478 22:52:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.478 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76074 00:08:05.738 killing process with pid 76074 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76074' 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76074 00:08:05.738 [2024-11-26 22:52:44.613306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.738 [2024-11-26 22:52:44.613404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.738 [2024-11-26 22:52:44.613452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.738 [2024-11-26 22:52:44.613464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76074 00:08:05.738 [2024-11-26 22:52:44.636233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:05.738 00:08:05.738 real 0m4.915s 00:08:05.738 user 0m7.994s 00:08:05.738 sys 0m1.081s 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.738 ************************************ 00:08:05.738 END TEST raid_superblock_test 00:08:05.738 ************************************ 00:08:05.738 22:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.998 
22:52:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:05.998 22:52:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.998 22:52:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.998 22:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.998 ************************************ 00:08:05.998 START TEST raid_read_error_test 00:08:05.998 ************************************ 00:08:05.998 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:05.998 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:05.998 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:05.998 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.999 22:52:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iwiSOpKRUB 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76393 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76393 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76393 ']' 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.999 22:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.999 [2024-11-26 22:52:45.051074] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:05.999 [2024-11-26 22:52:45.051282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76393 ] 00:08:06.259 [2024-11-26 22:52:45.186563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.259 [2024-11-26 22:52:45.226506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.259 [2024-11-26 22:52:45.252360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.259 [2024-11-26 22:52:45.294205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.259 [2024-11-26 22:52:45.294334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.828 BaseBdev1_malloc 00:08:06.828 22:52:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.828 true 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.828 [2024-11-26 22:52:45.910441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.828 [2024-11-26 22:52:45.910494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.828 [2024-11-26 22:52:45.910512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.828 [2024-11-26 22:52:45.910524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.828 [2024-11-26 22:52:45.912570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.828 [2024-11-26 22:52:45.912664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.828 BaseBdev1 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.828 BaseBdev2_malloc 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.829 true 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.829 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.829 [2024-11-26 22:52:45.950775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.829 [2024-11-26 22:52:45.950834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.829 [2024-11-26 22:52:45.950852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.829 [2024-11-26 22:52:45.950863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.829 [2024-11-26 22:52:45.952979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.829 [2024-11-26 22:52:45.953017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.089 BaseBdev2 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.089 [2024-11-26 22:52:45.962798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.089 [2024-11-26 22:52:45.964665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.089 [2024-11-26 22:52:45.964862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:07.089 [2024-11-26 22:52:45.964899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.089 [2024-11-26 22:52:45.965159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:07.089 [2024-11-26 22:52:45.965348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:07.089 [2024-11-26 22:52:45.965391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:07.089 [2024-11-26 22:52:45.965558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.089 22:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.089 22:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.089 "name": "raid_bdev1", 00:08:07.089 "uuid": "0b8357bb-518f-4aca-8da9-55d60432a95d", 00:08:07.089 "strip_size_kb": 0, 00:08:07.089 "state": "online", 00:08:07.089 "raid_level": "raid1", 00:08:07.089 "superblock": true, 00:08:07.089 "num_base_bdevs": 2, 00:08:07.089 "num_base_bdevs_discovered": 2, 00:08:07.089 "num_base_bdevs_operational": 2, 00:08:07.089 "base_bdevs_list": [ 00:08:07.089 { 00:08:07.089 "name": "BaseBdev1", 00:08:07.089 "uuid": "bb975d15-d097-5a36-9b16-f3efbda6d916", 00:08:07.089 "is_configured": true, 00:08:07.089 "data_offset": 2048, 00:08:07.089 "data_size": 63488 00:08:07.089 }, 00:08:07.089 { 00:08:07.089 "name": "BaseBdev2", 00:08:07.089 "uuid": 
"dd8d1f01-0b21-5faf-be4e-2b096809d61f", 00:08:07.089 "is_configured": true, 00:08:07.089 "data_offset": 2048, 00:08:07.089 "data_size": 63488 00:08:07.089 } 00:08:07.089 ] 00:08:07.089 }' 00:08:07.089 22:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.089 22:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.350 22:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.350 22:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.610 [2024-11-26 22:52:46.495342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.550 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.551 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.551 "name": "raid_bdev1", 00:08:08.551 "uuid": "0b8357bb-518f-4aca-8da9-55d60432a95d", 00:08:08.551 "strip_size_kb": 0, 00:08:08.551 "state": "online", 00:08:08.551 "raid_level": "raid1", 00:08:08.551 "superblock": true, 00:08:08.551 "num_base_bdevs": 2, 00:08:08.551 "num_base_bdevs_discovered": 2, 00:08:08.551 "num_base_bdevs_operational": 2, 00:08:08.551 "base_bdevs_list": [ 00:08:08.551 { 00:08:08.551 "name": "BaseBdev1", 00:08:08.551 "uuid": "bb975d15-d097-5a36-9b16-f3efbda6d916", 00:08:08.551 "is_configured": true, 00:08:08.551 "data_offset": 2048, 00:08:08.551 
"data_size": 63488 00:08:08.551 }, 00:08:08.551 { 00:08:08.551 "name": "BaseBdev2", 00:08:08.551 "uuid": "dd8d1f01-0b21-5faf-be4e-2b096809d61f", 00:08:08.551 "is_configured": true, 00:08:08.551 "data_offset": 2048, 00:08:08.551 "data_size": 63488 00:08:08.551 } 00:08:08.551 ] 00:08:08.551 }' 00:08:08.551 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.551 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.811 [2024-11-26 22:52:47.885714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.811 [2024-11-26 22:52:47.885823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.811 [2024-11-26 22:52:47.888439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.811 [2024-11-26 22:52:47.888517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.811 [2024-11-26 22:52:47.888617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.811 [2024-11-26 22:52:47.888670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:08.811 { 00:08:08.811 "results": [ 00:08:08.811 { 00:08:08.811 "job": "raid_bdev1", 00:08:08.811 "core_mask": "0x1", 00:08:08.811 "workload": "randrw", 00:08:08.811 "percentage": 50, 00:08:08.811 "status": "finished", 00:08:08.811 "queue_depth": 1, 00:08:08.811 "io_size": 131072, 00:08:08.811 "runtime": 1.388786, 00:08:08.811 "iops": 19942.59734761151, 00:08:08.811 "mibps": 2492.824668451439, 
00:08:08.811 "io_failed": 0, 00:08:08.811 "io_timeout": 0, 00:08:08.811 "avg_latency_us": 47.57494034546132, 00:08:08.811 "min_latency_us": 21.97855835439728, 00:08:08.811 "max_latency_us": 1577.993550074087 00:08:08.811 } 00:08:08.811 ], 00:08:08.811 "core_count": 1 00:08:08.811 } 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76393 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76393 ']' 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76393 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.811 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76393 00:08:09.071 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.071 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.071 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76393' 00:08:09.071 killing process with pid 76393 00:08:09.071 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76393 00:08:09.071 [2024-11-26 22:52:47.938538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.071 22:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76393 00:08:09.071 [2024-11-26 22:52:47.954313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iwiSOpKRUB 00:08:09.071 22:52:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:09.071 00:08:09.071 real 0m3.242s 00:08:09.071 user 0m4.123s 00:08:09.071 sys 0m0.510s 00:08:09.071 ************************************ 00:08:09.071 END TEST raid_read_error_test 00:08:09.071 ************************************ 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.071 22:52:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.332 22:52:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:09.332 22:52:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.332 22:52:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.332 22:52:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.332 ************************************ 00:08:09.332 START TEST raid_write_error_test 00:08:09.332 ************************************ 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:09.332 22:52:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:09.332 22:52:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qgbojX10R7 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76522 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76522 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76522 ']' 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.332 22:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.332 [2024-11-26 22:52:48.354393] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:09.332 [2024-11-26 22:52:48.354564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76522 ] 00:08:09.593 [2024-11-26 22:52:48.488701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.593 [2024-11-26 22:52:48.525906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.593 [2024-11-26 22:52:48.550852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.593 [2024-11-26 22:52:48.592718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.593 [2024-11-26 22:52:48.592753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 BaseBdev1_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 true 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 [2024-11-26 22:52:49.188827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:10.163 [2024-11-26 22:52:49.188879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.163 [2024-11-26 22:52:49.188895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:10.163 [2024-11-26 22:52:49.188908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.163 [2024-11-26 22:52:49.191003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.163 [2024-11-26 22:52:49.191046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:10.163 BaseBdev1 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 BaseBdev2_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 true 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 [2024-11-26 22:52:49.229283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:10.163 [2024-11-26 22:52:49.229328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.163 [2024-11-26 22:52:49.229342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:10.163 [2024-11-26 22:52:49.229352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.163 [2024-11-26 22:52:49.231348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.163 [2024-11-26 22:52:49.231440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:10.163 BaseBdev2 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.163 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 [2024-11-26 22:52:49.241318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.163 [2024-11-26 22:52:49.243059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.164 [2024-11-26 22:52:49.243214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:10.164 [2024-11-26 
22:52:49.243229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:10.164 [2024-11-26 22:52:49.243462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:10.164 [2024-11-26 22:52:49.243616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:10.164 [2024-11-26 22:52:49.243625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:10.164 [2024-11-26 22:52:49.243730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.164 22:52:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.164 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.423 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.423 "name": "raid_bdev1", 00:08:10.423 "uuid": "9077d858-8e48-46c0-8f5a-290c84f734e7", 00:08:10.423 "strip_size_kb": 0, 00:08:10.423 "state": "online", 00:08:10.423 "raid_level": "raid1", 00:08:10.423 "superblock": true, 00:08:10.423 "num_base_bdevs": 2, 00:08:10.423 "num_base_bdevs_discovered": 2, 00:08:10.423 "num_base_bdevs_operational": 2, 00:08:10.423 "base_bdevs_list": [ 00:08:10.423 { 00:08:10.423 "name": "BaseBdev1", 00:08:10.423 "uuid": "7f9258d0-c3f9-5277-bd06-10758583fadc", 00:08:10.423 "is_configured": true, 00:08:10.423 "data_offset": 2048, 00:08:10.423 "data_size": 63488 00:08:10.423 }, 00:08:10.423 { 00:08:10.423 "name": "BaseBdev2", 00:08:10.423 "uuid": "feaeddf2-e8b0-5917-84c2-ad05282d0219", 00:08:10.423 "is_configured": true, 00:08:10.423 "data_offset": 2048, 00:08:10.423 "data_size": 63488 00:08:10.423 } 00:08:10.423 ] 00:08:10.423 }' 00:08:10.423 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.423 22:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.683 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:10.683 22:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:10.683 [2024-11-26 22:52:49.721843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:11.623 22:52:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.623 [2024-11-26 22:52:50.652194] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:11.623 [2024-11-26 22:52:50.652370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.623 [2024-11-26 22:52:50.652603] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000067d0 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:11.623 
22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.623 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.624 "name": "raid_bdev1", 00:08:11.624 "uuid": "9077d858-8e48-46c0-8f5a-290c84f734e7", 00:08:11.624 "strip_size_kb": 0, 00:08:11.624 "state": "online", 00:08:11.624 "raid_level": "raid1", 00:08:11.624 "superblock": true, 00:08:11.624 "num_base_bdevs": 2, 00:08:11.624 "num_base_bdevs_discovered": 1, 00:08:11.624 "num_base_bdevs_operational": 1, 00:08:11.624 "base_bdevs_list": [ 00:08:11.624 { 00:08:11.624 "name": null, 00:08:11.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.624 "is_configured": false, 00:08:11.624 "data_offset": 0, 00:08:11.624 "data_size": 63488 00:08:11.624 }, 00:08:11.624 { 00:08:11.624 "name": "BaseBdev2", 00:08:11.624 "uuid": "feaeddf2-e8b0-5917-84c2-ad05282d0219", 00:08:11.624 "is_configured": true, 00:08:11.624 "data_offset": 2048, 00:08:11.624 "data_size": 63488 00:08:11.624 } 00:08:11.624 ] 00:08:11.624 }' 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.624 22:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.192 [2024-11-26 22:52:51.131324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.192 [2024-11-26 22:52:51.131420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.192 [2024-11-26 22:52:51.133874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.192 [2024-11-26 22:52:51.133917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.192 [2024-11-26 22:52:51.133968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.192 [2024-11-26 22:52:51.133982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.192 { 00:08:12.192 "results": [ 00:08:12.192 { 00:08:12.192 "job": "raid_bdev1", 00:08:12.192 "core_mask": "0x1", 00:08:12.192 "workload": "randrw", 00:08:12.192 "percentage": 50, 00:08:12.192 "status": "finished", 00:08:12.192 "queue_depth": 1, 00:08:12.192 "io_size": 131072, 00:08:12.192 "runtime": 1.407552, 00:08:12.192 "iops": 23296.47501477743, 00:08:12.192 "mibps": 2912.0593768471786, 00:08:12.192 "io_failed": 0, 00:08:12.192 "io_timeout": 0, 00:08:12.192 "avg_latency_us": 40.329295190493596, 00:08:12.192 "min_latency_us": 21.53229321014556, 00:08:12.192 "max_latency_us": 1392.3472500653709 00:08:12.192 } 00:08:12.192 ], 
00:08:12.192 "core_count": 1 00:08:12.192 } 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76522 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76522 ']' 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76522 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76522 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76522' 00:08:12.192 killing process with pid 76522 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76522 00:08:12.192 [2024-11-26 22:52:51.168465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.192 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76522 00:08:12.192 [2024-11-26 22:52:51.183063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qgbojX10R7 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.451 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:12.452 22:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:12.452 00:08:12.452 real 0m3.155s 00:08:12.452 user 0m3.971s 00:08:12.452 sys 0m0.501s 00:08:12.452 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.452 22:52:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.452 ************************************ 00:08:12.452 END TEST raid_write_error_test 00:08:12.452 ************************************ 00:08:12.452 22:52:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:12.452 22:52:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:12.452 22:52:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:12.452 22:52:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.452 22:52:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.452 22:52:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.452 ************************************ 00:08:12.452 START TEST raid_state_function_test 00:08:12.452 ************************************ 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:12.452 22:52:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:12.452 Process raid pid: 76649 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76649 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76649' 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76649 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76649 ']' 00:08:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.452 22:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.711 [2024-11-26 22:52:51.579498] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:08:12.711 [2024-11-26 22:52:51.579721] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.711 [2024-11-26 22:52:51.721086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:12.711 [2024-11-26 22:52:51.758623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.711 [2024-11-26 22:52:51.783696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.711 [2024-11-26 22:52:51.825132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.711 [2024-11-26 22:52:51.825167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.286 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.286 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.286 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.286 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.286 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.286 [2024-11-26 22:52:52.404886] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.286 [2024-11-26 22:52:52.404936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.286 [2024-11-26 22:52:52.404958] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.286 [2024-11-26 22:52:52.404967] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.286 [2024-11-26 22:52:52.404979] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.286 [2024-11-26 22:52:52.404986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.552 22:52:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.552 "name": "Existed_Raid", 00:08:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.552 "strip_size_kb": 64, 00:08:13.552 "state": "configuring", 00:08:13.552 "raid_level": "raid0", 00:08:13.552 "superblock": false, 00:08:13.552 "num_base_bdevs": 3, 00:08:13.552 "num_base_bdevs_discovered": 0, 00:08:13.552 "num_base_bdevs_operational": 3, 00:08:13.552 "base_bdevs_list": [ 00:08:13.552 { 00:08:13.552 "name": "BaseBdev1", 00:08:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.552 "is_configured": false, 00:08:13.552 "data_offset": 0, 00:08:13.552 "data_size": 0 00:08:13.552 }, 00:08:13.552 { 00:08:13.552 "name": "BaseBdev2", 00:08:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.552 "is_configured": false, 00:08:13.552 "data_offset": 0, 00:08:13.552 "data_size": 0 00:08:13.552 }, 00:08:13.552 { 00:08:13.552 "name": "BaseBdev3", 00:08:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.552 "is_configured": false, 00:08:13.552 "data_offset": 0, 00:08:13.552 "data_size": 0 00:08:13.552 } 00:08:13.552 ] 00:08:13.552 }' 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.552 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.811 [2024-11-26 22:52:52.852893] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.811 [2024-11-26 22:52:52.852967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.811 [2024-11-26 22:52:52.860926] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.811 [2024-11-26 22:52:52.861000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.811 [2024-11-26 22:52:52.861028] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.811 [2024-11-26 22:52:52.861048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.811 [2024-11-26 22:52:52.861067] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.811 [2024-11-26 22:52:52.861085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.811 
[2024-11-26 22:52:52.877637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.811 BaseBdev1 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.811 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.812 [ 00:08:13.812 { 00:08:13.812 "name": "BaseBdev1", 00:08:13.812 "aliases": [ 00:08:13.812 "5c75830b-e1f6-447a-ac88-07baab8816eb" 00:08:13.812 ], 00:08:13.812 "product_name": "Malloc disk", 00:08:13.812 "block_size": 512, 00:08:13.812 "num_blocks": 65536, 00:08:13.812 "uuid": 
"5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:13.812 "assigned_rate_limits": { 00:08:13.812 "rw_ios_per_sec": 0, 00:08:13.812 "rw_mbytes_per_sec": 0, 00:08:13.812 "r_mbytes_per_sec": 0, 00:08:13.812 "w_mbytes_per_sec": 0 00:08:13.812 }, 00:08:13.812 "claimed": true, 00:08:13.812 "claim_type": "exclusive_write", 00:08:13.812 "zoned": false, 00:08:13.812 "supported_io_types": { 00:08:13.812 "read": true, 00:08:13.812 "write": true, 00:08:13.812 "unmap": true, 00:08:13.812 "flush": true, 00:08:13.812 "reset": true, 00:08:13.812 "nvme_admin": false, 00:08:13.812 "nvme_io": false, 00:08:13.812 "nvme_io_md": false, 00:08:13.812 "write_zeroes": true, 00:08:13.812 "zcopy": true, 00:08:13.812 "get_zone_info": false, 00:08:13.812 "zone_management": false, 00:08:13.812 "zone_append": false, 00:08:13.812 "compare": false, 00:08:13.812 "compare_and_write": false, 00:08:13.812 "abort": true, 00:08:13.812 "seek_hole": false, 00:08:13.812 "seek_data": false, 00:08:13.812 "copy": true, 00:08:13.812 "nvme_iov_md": false 00:08:13.812 }, 00:08:13.812 "memory_domains": [ 00:08:13.812 { 00:08:13.812 "dma_device_id": "system", 00:08:13.812 "dma_device_type": 1 00:08:13.812 }, 00:08:13.812 { 00:08:13.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.812 "dma_device_type": 2 00:08:13.812 } 00:08:13.812 ], 00:08:13.812 "driver_specific": {} 00:08:13.812 } 00:08:13.812 ] 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.812 22:52:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.812 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.071 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.071 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.071 "name": "Existed_Raid", 00:08:14.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.071 "strip_size_kb": 64, 00:08:14.071 "state": "configuring", 00:08:14.071 "raid_level": "raid0", 00:08:14.071 "superblock": false, 00:08:14.071 "num_base_bdevs": 3, 00:08:14.071 "num_base_bdevs_discovered": 1, 00:08:14.071 "num_base_bdevs_operational": 3, 00:08:14.071 "base_bdevs_list": [ 00:08:14.071 { 00:08:14.071 "name": "BaseBdev1", 00:08:14.071 "uuid": "5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:14.071 "is_configured": true, 00:08:14.071 "data_offset": 0, 
00:08:14.071 "data_size": 65536 00:08:14.071 }, 00:08:14.071 { 00:08:14.071 "name": "BaseBdev2", 00:08:14.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.071 "is_configured": false, 00:08:14.071 "data_offset": 0, 00:08:14.071 "data_size": 0 00:08:14.071 }, 00:08:14.071 { 00:08:14.071 "name": "BaseBdev3", 00:08:14.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.071 "is_configured": false, 00:08:14.071 "data_offset": 0, 00:08:14.071 "data_size": 0 00:08:14.071 } 00:08:14.071 ] 00:08:14.071 }' 00:08:14.071 22:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.071 22:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.330 [2024-11-26 22:52:53.353791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.330 [2024-11-26 22:52:53.353838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.330 [2024-11-26 22:52:53.365836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.330 [2024-11-26 
22:52:53.367645] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.330 [2024-11-26 22:52:53.367678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.330 [2024-11-26 22:52:53.367690] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.330 [2024-11-26 22:52:53.367713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.330 22:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.330 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.330 "name": "Existed_Raid", 00:08:14.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.330 "strip_size_kb": 64, 00:08:14.330 "state": "configuring", 00:08:14.330 "raid_level": "raid0", 00:08:14.330 "superblock": false, 00:08:14.330 "num_base_bdevs": 3, 00:08:14.330 "num_base_bdevs_discovered": 1, 00:08:14.330 "num_base_bdevs_operational": 3, 00:08:14.330 "base_bdevs_list": [ 00:08:14.330 { 00:08:14.330 "name": "BaseBdev1", 00:08:14.330 "uuid": "5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:14.330 "is_configured": true, 00:08:14.330 "data_offset": 0, 00:08:14.330 "data_size": 65536 00:08:14.330 }, 00:08:14.330 { 00:08:14.330 "name": "BaseBdev2", 00:08:14.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.330 "is_configured": false, 00:08:14.330 "data_offset": 0, 00:08:14.330 "data_size": 0 00:08:14.330 }, 00:08:14.330 { 00:08:14.330 "name": "BaseBdev3", 00:08:14.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.330 "is_configured": false, 00:08:14.330 "data_offset": 0, 00:08:14.330 "data_size": 0 00:08:14.330 } 00:08:14.330 ] 00:08:14.330 }' 00:08:14.331 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.331 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 22:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 [2024-11-26 22:52:53.760663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.899 BaseBdev2 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.899 [ 00:08:14.899 { 00:08:14.899 "name": "BaseBdev2", 00:08:14.899 "aliases": [ 00:08:14.899 "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9" 00:08:14.899 ], 00:08:14.899 "product_name": "Malloc disk", 00:08:14.899 "block_size": 512, 00:08:14.899 "num_blocks": 65536, 00:08:14.899 "uuid": "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9", 00:08:14.899 "assigned_rate_limits": { 00:08:14.899 "rw_ios_per_sec": 0, 00:08:14.899 "rw_mbytes_per_sec": 0, 00:08:14.899 "r_mbytes_per_sec": 0, 00:08:14.899 "w_mbytes_per_sec": 0 00:08:14.899 }, 00:08:14.899 "claimed": true, 00:08:14.899 "claim_type": "exclusive_write", 00:08:14.899 "zoned": false, 00:08:14.899 "supported_io_types": { 00:08:14.899 "read": true, 00:08:14.899 "write": true, 00:08:14.899 "unmap": true, 00:08:14.899 "flush": true, 00:08:14.899 "reset": true, 00:08:14.899 "nvme_admin": false, 00:08:14.899 "nvme_io": false, 00:08:14.899 "nvme_io_md": false, 00:08:14.899 "write_zeroes": true, 00:08:14.899 "zcopy": true, 00:08:14.899 "get_zone_info": false, 00:08:14.899 "zone_management": false, 00:08:14.899 "zone_append": false, 00:08:14.899 "compare": false, 00:08:14.899 "compare_and_write": false, 00:08:14.899 "abort": true, 00:08:14.899 "seek_hole": false, 00:08:14.899 "seek_data": false, 00:08:14.899 "copy": true, 00:08:14.899 "nvme_iov_md": false 00:08:14.899 }, 00:08:14.899 "memory_domains": [ 00:08:14.899 { 00:08:14.899 "dma_device_id": "system", 00:08:14.899 "dma_device_type": 1 00:08:14.899 }, 00:08:14.899 { 00:08:14.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.899 "dma_device_type": 2 00:08:14.899 } 00:08:14.899 ], 00:08:14.899 "driver_specific": {} 00:08:14.899 } 00:08:14.899 ] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.899 "name": "Existed_Raid", 
00:08:14.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.899 "strip_size_kb": 64, 00:08:14.899 "state": "configuring", 00:08:14.899 "raid_level": "raid0", 00:08:14.899 "superblock": false, 00:08:14.899 "num_base_bdevs": 3, 00:08:14.899 "num_base_bdevs_discovered": 2, 00:08:14.899 "num_base_bdevs_operational": 3, 00:08:14.899 "base_bdevs_list": [ 00:08:14.899 { 00:08:14.899 "name": "BaseBdev1", 00:08:14.899 "uuid": "5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:14.899 "is_configured": true, 00:08:14.899 "data_offset": 0, 00:08:14.899 "data_size": 65536 00:08:14.899 }, 00:08:14.899 { 00:08:14.899 "name": "BaseBdev2", 00:08:14.899 "uuid": "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9", 00:08:14.899 "is_configured": true, 00:08:14.899 "data_offset": 0, 00:08:14.899 "data_size": 65536 00:08:14.899 }, 00:08:14.899 { 00:08:14.899 "name": "BaseBdev3", 00:08:14.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.899 "is_configured": false, 00:08:14.899 "data_offset": 0, 00:08:14.899 "data_size": 0 00:08:14.899 } 00:08:14.899 ] 00:08:14.899 }' 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.899 22:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.158 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:15.158 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.158 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.417 [2024-11-26 22:52:54.300729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.417 [2024-11-26 22:52:54.300772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:15.417 [2024-11-26 22:52:54.300783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:08:15.417 [2024-11-26 22:52:54.301134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:15.417 [2024-11-26 22:52:54.301316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:15.417 [2024-11-26 22:52:54.301334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:15.417 [2024-11-26 22:52:54.301580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.417 BaseBdev3 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.417 [ 00:08:15.417 { 00:08:15.417 "name": "BaseBdev3", 00:08:15.417 "aliases": [ 00:08:15.417 "8966ed94-b995-48c0-ad01-3c5a4c5fce16" 00:08:15.417 ], 00:08:15.417 "product_name": "Malloc disk", 00:08:15.417 "block_size": 512, 00:08:15.417 "num_blocks": 65536, 00:08:15.417 "uuid": "8966ed94-b995-48c0-ad01-3c5a4c5fce16", 00:08:15.417 "assigned_rate_limits": { 00:08:15.417 "rw_ios_per_sec": 0, 00:08:15.417 "rw_mbytes_per_sec": 0, 00:08:15.417 "r_mbytes_per_sec": 0, 00:08:15.417 "w_mbytes_per_sec": 0 00:08:15.417 }, 00:08:15.417 "claimed": true, 00:08:15.417 "claim_type": "exclusive_write", 00:08:15.417 "zoned": false, 00:08:15.417 "supported_io_types": { 00:08:15.417 "read": true, 00:08:15.417 "write": true, 00:08:15.417 "unmap": true, 00:08:15.417 "flush": true, 00:08:15.417 "reset": true, 00:08:15.417 "nvme_admin": false, 00:08:15.417 "nvme_io": false, 00:08:15.417 "nvme_io_md": false, 00:08:15.417 "write_zeroes": true, 00:08:15.417 "zcopy": true, 00:08:15.417 "get_zone_info": false, 00:08:15.417 "zone_management": false, 00:08:15.417 "zone_append": false, 00:08:15.417 "compare": false, 00:08:15.417 "compare_and_write": false, 00:08:15.417 "abort": true, 00:08:15.417 "seek_hole": false, 00:08:15.417 "seek_data": false, 00:08:15.417 "copy": true, 00:08:15.417 "nvme_iov_md": false 00:08:15.417 }, 00:08:15.417 "memory_domains": [ 00:08:15.417 { 00:08:15.417 "dma_device_id": "system", 00:08:15.417 "dma_device_type": 1 00:08:15.417 }, 00:08:15.417 { 00:08:15.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.417 "dma_device_type": 2 00:08:15.417 } 00:08:15.417 ], 00:08:15.417 "driver_specific": {} 00:08:15.417 } 00:08:15.417 ] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.417 22:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.417 "name": "Existed_Raid", 00:08:15.417 "uuid": "8f52d2ac-94e4-4df2-9623-ee9b2225c596", 00:08:15.417 "strip_size_kb": 64, 00:08:15.417 "state": "online", 00:08:15.417 "raid_level": "raid0", 00:08:15.417 "superblock": false, 00:08:15.417 "num_base_bdevs": 3, 00:08:15.417 "num_base_bdevs_discovered": 3, 00:08:15.417 "num_base_bdevs_operational": 3, 00:08:15.417 "base_bdevs_list": [ 00:08:15.417 { 00:08:15.417 "name": "BaseBdev1", 00:08:15.417 "uuid": "5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:15.417 "is_configured": true, 00:08:15.417 "data_offset": 0, 00:08:15.417 "data_size": 65536 00:08:15.417 }, 00:08:15.417 { 00:08:15.417 "name": "BaseBdev2", 00:08:15.418 "uuid": "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9", 00:08:15.418 "is_configured": true, 00:08:15.418 "data_offset": 0, 00:08:15.418 "data_size": 65536 00:08:15.418 }, 00:08:15.418 { 00:08:15.418 "name": "BaseBdev3", 00:08:15.418 "uuid": "8966ed94-b995-48c0-ad01-3c5a4c5fce16", 00:08:15.418 "is_configured": true, 00:08:15.418 "data_offset": 0, 00:08:15.418 "data_size": 65536 00:08:15.418 } 00:08:15.418 ] 00:08:15.418 }' 00:08:15.418 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.418 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 [2024-11-26 22:52:54.809184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.984 "name": "Existed_Raid", 00:08:15.984 "aliases": [ 00:08:15.984 "8f52d2ac-94e4-4df2-9623-ee9b2225c596" 00:08:15.984 ], 00:08:15.984 "product_name": "Raid Volume", 00:08:15.984 "block_size": 512, 00:08:15.984 "num_blocks": 196608, 00:08:15.984 "uuid": "8f52d2ac-94e4-4df2-9623-ee9b2225c596", 00:08:15.984 "assigned_rate_limits": { 00:08:15.984 "rw_ios_per_sec": 0, 00:08:15.984 "rw_mbytes_per_sec": 0, 00:08:15.984 "r_mbytes_per_sec": 0, 00:08:15.984 "w_mbytes_per_sec": 0 00:08:15.984 }, 00:08:15.984 "claimed": false, 00:08:15.984 "zoned": false, 00:08:15.984 "supported_io_types": { 00:08:15.984 "read": true, 00:08:15.984 "write": true, 00:08:15.984 "unmap": true, 00:08:15.984 "flush": true, 00:08:15.984 "reset": true, 00:08:15.984 "nvme_admin": false, 00:08:15.984 "nvme_io": false, 00:08:15.984 "nvme_io_md": false, 00:08:15.984 "write_zeroes": true, 00:08:15.984 "zcopy": false, 00:08:15.984 "get_zone_info": false, 00:08:15.984 "zone_management": false, 00:08:15.984 "zone_append": false, 00:08:15.984 "compare": false, 00:08:15.984 "compare_and_write": false, 00:08:15.984 "abort": false, 00:08:15.984 "seek_hole": false, 00:08:15.984 "seek_data": false, 00:08:15.984 "copy": 
false, 00:08:15.984 "nvme_iov_md": false 00:08:15.984 }, 00:08:15.984 "memory_domains": [ 00:08:15.984 { 00:08:15.984 "dma_device_id": "system", 00:08:15.984 "dma_device_type": 1 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.984 "dma_device_type": 2 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "dma_device_id": "system", 00:08:15.984 "dma_device_type": 1 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.984 "dma_device_type": 2 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "dma_device_id": "system", 00:08:15.984 "dma_device_type": 1 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.984 "dma_device_type": 2 00:08:15.984 } 00:08:15.984 ], 00:08:15.984 "driver_specific": { 00:08:15.984 "raid": { 00:08:15.984 "uuid": "8f52d2ac-94e4-4df2-9623-ee9b2225c596", 00:08:15.984 "strip_size_kb": 64, 00:08:15.984 "state": "online", 00:08:15.984 "raid_level": "raid0", 00:08:15.984 "superblock": false, 00:08:15.984 "num_base_bdevs": 3, 00:08:15.984 "num_base_bdevs_discovered": 3, 00:08:15.984 "num_base_bdevs_operational": 3, 00:08:15.984 "base_bdevs_list": [ 00:08:15.984 { 00:08:15.984 "name": "BaseBdev1", 00:08:15.984 "uuid": "5c75830b-e1f6-447a-ac88-07baab8816eb", 00:08:15.984 "is_configured": true, 00:08:15.984 "data_offset": 0, 00:08:15.984 "data_size": 65536 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "name": "BaseBdev2", 00:08:15.984 "uuid": "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9", 00:08:15.984 "is_configured": true, 00:08:15.984 "data_offset": 0, 00:08:15.984 "data_size": 65536 00:08:15.984 }, 00:08:15.984 { 00:08:15.984 "name": "BaseBdev3", 00:08:15.984 "uuid": "8966ed94-b995-48c0-ad01-3c5a4c5fce16", 00:08:15.984 "is_configured": true, 00:08:15.984 "data_offset": 0, 00:08:15.984 "data_size": 65536 00:08:15.984 } 00:08:15.984 ] 00:08:15.984 } 00:08:15.984 } 00:08:15.984 }' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.984 BaseBdev2 00:08:15.984 BaseBdev3' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.984 22:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.984 [2024-11-26 22:52:55.065012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.984 [2024-11-26 22:52:55.065042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.984 [2024-11-26 22:52:55.065097] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.984 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.985 22:52:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.243 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.243 "name": "Existed_Raid", 00:08:16.243 "uuid": "8f52d2ac-94e4-4df2-9623-ee9b2225c596", 00:08:16.243 "strip_size_kb": 64, 00:08:16.243 "state": "offline", 00:08:16.243 "raid_level": "raid0", 00:08:16.243 "superblock": false, 00:08:16.243 "num_base_bdevs": 3, 00:08:16.243 "num_base_bdevs_discovered": 2, 00:08:16.243 "num_base_bdevs_operational": 2, 00:08:16.243 "base_bdevs_list": [ 00:08:16.243 { 00:08:16.243 "name": null, 00:08:16.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.243 "is_configured": false, 00:08:16.243 "data_offset": 0, 00:08:16.243 "data_size": 65536 00:08:16.243 }, 00:08:16.243 { 00:08:16.243 "name": "BaseBdev2", 00:08:16.243 "uuid": "0697b52a-a8b7-481e-ab7a-d9e0f59a9ce9", 00:08:16.243 "is_configured": true, 00:08:16.243 "data_offset": 0, 00:08:16.243 "data_size": 65536 00:08:16.243 }, 00:08:16.243 { 00:08:16.243 "name": "BaseBdev3", 00:08:16.243 "uuid": "8966ed94-b995-48c0-ad01-3c5a4c5fce16", 00:08:16.243 "is_configured": true, 00:08:16.243 "data_offset": 0, 00:08:16.243 "data_size": 65536 00:08:16.243 } 00:08:16.243 ] 00:08:16.243 }' 00:08:16.243 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.243 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.502 [2024-11-26 22:52:55.596457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:16.502 22:52:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 [2024-11-26 22:52:55.663327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:16.761 [2024-11-26 22:52:55.663429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:16.761 22:52:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 BaseBdev2 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.761 
22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.761 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 [ 00:08:16.761 { 00:08:16.761 "name": "BaseBdev2", 00:08:16.761 "aliases": [ 00:08:16.761 "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a" 00:08:16.761 ], 00:08:16.761 "product_name": "Malloc disk", 00:08:16.761 "block_size": 512, 00:08:16.761 "num_blocks": 65536, 00:08:16.761 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:16.761 "assigned_rate_limits": { 00:08:16.761 "rw_ios_per_sec": 0, 00:08:16.761 "rw_mbytes_per_sec": 0, 00:08:16.761 "r_mbytes_per_sec": 0, 00:08:16.761 "w_mbytes_per_sec": 0 00:08:16.761 }, 00:08:16.761 "claimed": false, 00:08:16.761 "zoned": false, 00:08:16.761 "supported_io_types": { 00:08:16.761 "read": true, 00:08:16.761 "write": true, 00:08:16.761 "unmap": true, 00:08:16.761 "flush": true, 00:08:16.761 "reset": true, 00:08:16.761 "nvme_admin": false, 00:08:16.761 "nvme_io": false, 00:08:16.761 "nvme_io_md": false, 00:08:16.761 "write_zeroes": true, 00:08:16.761 "zcopy": true, 00:08:16.761 "get_zone_info": false, 00:08:16.761 "zone_management": false, 00:08:16.761 "zone_append": false, 00:08:16.761 "compare": false, 00:08:16.761 "compare_and_write": false, 00:08:16.761 "abort": true, 00:08:16.761 "seek_hole": false, 00:08:16.761 "seek_data": false, 00:08:16.761 "copy": true, 00:08:16.761 "nvme_iov_md": false 00:08:16.761 }, 00:08:16.761 "memory_domains": [ 00:08:16.761 { 00:08:16.761 "dma_device_id": "system", 00:08:16.761 "dma_device_type": 1 00:08:16.761 }, 00:08:16.761 { 00:08:16.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.761 "dma_device_type": 2 00:08:16.761 } 00:08:16.761 ], 00:08:16.761 "driver_specific": {} 00:08:16.761 } 00:08:16.761 ] 00:08:16.762 22:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.762 BaseBdev3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.762 
22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.762 [ 00:08:16.762 { 00:08:16.762 "name": "BaseBdev3", 00:08:16.762 "aliases": [ 00:08:16.762 "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482" 00:08:16.762 ], 00:08:16.762 "product_name": "Malloc disk", 00:08:16.762 "block_size": 512, 00:08:16.762 "num_blocks": 65536, 00:08:16.762 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:16.762 "assigned_rate_limits": { 00:08:16.762 "rw_ios_per_sec": 0, 00:08:16.762 "rw_mbytes_per_sec": 0, 00:08:16.762 "r_mbytes_per_sec": 0, 00:08:16.762 "w_mbytes_per_sec": 0 00:08:16.762 }, 00:08:16.762 "claimed": false, 00:08:16.762 "zoned": false, 00:08:16.762 "supported_io_types": { 00:08:16.762 "read": true, 00:08:16.762 "write": true, 00:08:16.762 "unmap": true, 00:08:16.762 "flush": true, 00:08:16.762 "reset": true, 00:08:16.762 "nvme_admin": false, 00:08:16.762 "nvme_io": false, 00:08:16.762 "nvme_io_md": false, 00:08:16.762 "write_zeroes": true, 00:08:16.762 "zcopy": true, 00:08:16.762 "get_zone_info": false, 00:08:16.762 "zone_management": false, 00:08:16.762 "zone_append": false, 00:08:16.762 "compare": false, 00:08:16.762 "compare_and_write": false, 00:08:16.762 "abort": true, 00:08:16.762 "seek_hole": false, 00:08:16.762 "seek_data": false, 00:08:16.762 "copy": true, 00:08:16.762 "nvme_iov_md": false 00:08:16.762 }, 00:08:16.762 "memory_domains": [ 00:08:16.762 { 00:08:16.762 "dma_device_id": "system", 00:08:16.762 "dma_device_type": 1 00:08:16.762 }, 00:08:16.762 { 00:08:16.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.762 "dma_device_type": 2 00:08:16.762 } 00:08:16.762 ], 00:08:16.762 "driver_specific": {} 00:08:16.762 } 00:08:16.762 ] 00:08:16.762 22:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.762 [2024-11-26 22:52:55.837586] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.762 [2024-11-26 22:52:55.837680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.762 [2024-11-26 22:52:55.837733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.762 [2024-11-26 22:52:55.839530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.762 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.021 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.021 "name": "Existed_Raid", 00:08:17.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.021 "strip_size_kb": 64, 00:08:17.021 "state": "configuring", 00:08:17.021 "raid_level": "raid0", 00:08:17.021 "superblock": false, 00:08:17.021 "num_base_bdevs": 3, 00:08:17.021 "num_base_bdevs_discovered": 2, 00:08:17.021 "num_base_bdevs_operational": 3, 00:08:17.021 "base_bdevs_list": [ 00:08:17.021 { 00:08:17.021 "name": "BaseBdev1", 00:08:17.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.021 "is_configured": false, 00:08:17.021 "data_offset": 0, 00:08:17.021 "data_size": 0 00:08:17.021 }, 00:08:17.021 { 00:08:17.021 "name": "BaseBdev2", 00:08:17.021 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:17.021 
"is_configured": true, 00:08:17.021 "data_offset": 0, 00:08:17.021 "data_size": 65536 00:08:17.021 }, 00:08:17.021 { 00:08:17.021 "name": "BaseBdev3", 00:08:17.021 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:17.021 "is_configured": true, 00:08:17.021 "data_offset": 0, 00:08:17.021 "data_size": 65536 00:08:17.021 } 00:08:17.021 ] 00:08:17.021 }' 00:08:17.021 22:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.021 22:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.279 [2024-11-26 22:52:56.293714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.279 22:52:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.279 "name": "Existed_Raid", 00:08:17.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.279 "strip_size_kb": 64, 00:08:17.279 "state": "configuring", 00:08:17.279 "raid_level": "raid0", 00:08:17.279 "superblock": false, 00:08:17.279 "num_base_bdevs": 3, 00:08:17.279 "num_base_bdevs_discovered": 1, 00:08:17.279 "num_base_bdevs_operational": 3, 00:08:17.279 "base_bdevs_list": [ 00:08:17.279 { 00:08:17.279 "name": "BaseBdev1", 00:08:17.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.279 "is_configured": false, 00:08:17.279 "data_offset": 0, 00:08:17.279 "data_size": 0 00:08:17.279 }, 00:08:17.279 { 00:08:17.279 "name": null, 00:08:17.279 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:17.279 "is_configured": false, 00:08:17.279 "data_offset": 0, 00:08:17.279 "data_size": 65536 00:08:17.279 }, 00:08:17.279 { 00:08:17.279 "name": "BaseBdev3", 00:08:17.279 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:17.279 "is_configured": true, 00:08:17.279 "data_offset": 0, 
00:08:17.279 "data_size": 65536 00:08:17.279 } 00:08:17.279 ] 00:08:17.279 }' 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.279 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.846 [2024-11-26 22:52:56.800618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.846 BaseBdev1 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.846 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.846 [ 00:08:17.846 { 00:08:17.846 "name": "BaseBdev1", 00:08:17.846 "aliases": [ 00:08:17.846 "746cc170-eb3d-4221-888c-5e4066bd8499" 00:08:17.846 ], 00:08:17.846 "product_name": "Malloc disk", 00:08:17.846 "block_size": 512, 00:08:17.846 "num_blocks": 65536, 00:08:17.846 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:17.846 "assigned_rate_limits": { 00:08:17.846 "rw_ios_per_sec": 0, 00:08:17.846 "rw_mbytes_per_sec": 0, 00:08:17.846 "r_mbytes_per_sec": 0, 00:08:17.846 "w_mbytes_per_sec": 0 00:08:17.846 }, 00:08:17.847 "claimed": true, 00:08:17.847 "claim_type": "exclusive_write", 00:08:17.847 "zoned": false, 00:08:17.847 "supported_io_types": { 00:08:17.847 "read": true, 00:08:17.847 "write": true, 00:08:17.847 "unmap": true, 00:08:17.847 "flush": true, 00:08:17.847 "reset": true, 00:08:17.847 "nvme_admin": false, 00:08:17.847 "nvme_io": false, 00:08:17.847 "nvme_io_md": false, 00:08:17.847 "write_zeroes": true, 00:08:17.847 "zcopy": true, 
00:08:17.847 "get_zone_info": false, 00:08:17.847 "zone_management": false, 00:08:17.847 "zone_append": false, 00:08:17.847 "compare": false, 00:08:17.847 "compare_and_write": false, 00:08:17.847 "abort": true, 00:08:17.847 "seek_hole": false, 00:08:17.847 "seek_data": false, 00:08:17.847 "copy": true, 00:08:17.847 "nvme_iov_md": false 00:08:17.847 }, 00:08:17.847 "memory_domains": [ 00:08:17.847 { 00:08:17.847 "dma_device_id": "system", 00:08:17.847 "dma_device_type": 1 00:08:17.847 }, 00:08:17.847 { 00:08:17.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.847 "dma_device_type": 2 00:08:17.847 } 00:08:17.847 ], 00:08:17.847 "driver_specific": {} 00:08:17.847 } 00:08:17.847 ] 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.847 22:52:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.847 "name": "Existed_Raid", 00:08:17.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.847 "strip_size_kb": 64, 00:08:17.847 "state": "configuring", 00:08:17.847 "raid_level": "raid0", 00:08:17.847 "superblock": false, 00:08:17.847 "num_base_bdevs": 3, 00:08:17.847 "num_base_bdevs_discovered": 2, 00:08:17.847 "num_base_bdevs_operational": 3, 00:08:17.847 "base_bdevs_list": [ 00:08:17.847 { 00:08:17.847 "name": "BaseBdev1", 00:08:17.847 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:17.847 "is_configured": true, 00:08:17.847 "data_offset": 0, 00:08:17.847 "data_size": 65536 00:08:17.847 }, 00:08:17.847 { 00:08:17.847 "name": null, 00:08:17.847 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:17.847 "is_configured": false, 00:08:17.847 "data_offset": 0, 00:08:17.847 "data_size": 65536 00:08:17.847 }, 00:08:17.847 { 00:08:17.847 "name": "BaseBdev3", 00:08:17.847 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:17.847 "is_configured": true, 00:08:17.847 "data_offset": 0, 00:08:17.847 "data_size": 65536 00:08:17.847 } 00:08:17.847 ] 00:08:17.847 }' 00:08:17.847 22:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.847 22:52:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 [2024-11-26 22:52:57.304794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.414 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.415 "name": "Existed_Raid", 00:08:18.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.415 "strip_size_kb": 64, 00:08:18.415 "state": "configuring", 00:08:18.415 "raid_level": "raid0", 00:08:18.415 "superblock": false, 00:08:18.415 "num_base_bdevs": 3, 00:08:18.415 "num_base_bdevs_discovered": 1, 00:08:18.415 "num_base_bdevs_operational": 3, 00:08:18.415 "base_bdevs_list": [ 00:08:18.415 { 00:08:18.415 "name": "BaseBdev1", 00:08:18.415 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:18.415 "is_configured": true, 00:08:18.415 "data_offset": 0, 00:08:18.415 "data_size": 65536 00:08:18.415 }, 00:08:18.415 { 00:08:18.415 "name": null, 00:08:18.415 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:18.415 "is_configured": false, 00:08:18.415 "data_offset": 0, 00:08:18.415 "data_size": 65536 
00:08:18.415 }, 00:08:18.415 { 00:08:18.415 "name": null, 00:08:18.415 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:18.415 "is_configured": false, 00:08:18.415 "data_offset": 0, 00:08:18.415 "data_size": 65536 00:08:18.415 } 00:08:18.415 ] 00:08:18.415 }' 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.415 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.675 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.675 [2024-11-26 22:52:57.800980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.935 "name": "Existed_Raid", 00:08:18.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.935 "strip_size_kb": 64, 00:08:18.935 "state": "configuring", 00:08:18.935 "raid_level": "raid0", 00:08:18.935 "superblock": false, 00:08:18.935 "num_base_bdevs": 3, 00:08:18.935 "num_base_bdevs_discovered": 2, 00:08:18.935 "num_base_bdevs_operational": 3, 00:08:18.935 "base_bdevs_list": [ 
00:08:18.935 { 00:08:18.935 "name": "BaseBdev1", 00:08:18.935 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:18.935 "is_configured": true, 00:08:18.935 "data_offset": 0, 00:08:18.935 "data_size": 65536 00:08:18.935 }, 00:08:18.935 { 00:08:18.935 "name": null, 00:08:18.935 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:18.935 "is_configured": false, 00:08:18.935 "data_offset": 0, 00:08:18.935 "data_size": 65536 00:08:18.935 }, 00:08:18.935 { 00:08:18.935 "name": "BaseBdev3", 00:08:18.935 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:18.935 "is_configured": true, 00:08:18.935 "data_offset": 0, 00:08:18.935 "data_size": 65536 00:08:18.935 } 00:08:18.935 ] 00:08:18.935 }' 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.935 22:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 [2024-11-26 22:52:58.289110] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.455 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.455 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:19.455 "name": "Existed_Raid", 00:08:19.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.455 "strip_size_kb": 64, 00:08:19.455 "state": "configuring", 00:08:19.455 "raid_level": "raid0", 00:08:19.455 "superblock": false, 00:08:19.455 "num_base_bdevs": 3, 00:08:19.455 "num_base_bdevs_discovered": 1, 00:08:19.455 "num_base_bdevs_operational": 3, 00:08:19.455 "base_bdevs_list": [ 00:08:19.455 { 00:08:19.455 "name": null, 00:08:19.455 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:19.455 "is_configured": false, 00:08:19.455 "data_offset": 0, 00:08:19.455 "data_size": 65536 00:08:19.455 }, 00:08:19.455 { 00:08:19.455 "name": null, 00:08:19.455 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:19.455 "is_configured": false, 00:08:19.455 "data_offset": 0, 00:08:19.455 "data_size": 65536 00:08:19.455 }, 00:08:19.455 { 00:08:19.455 "name": "BaseBdev3", 00:08:19.455 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:19.455 "is_configured": true, 00:08:19.455 "data_offset": 0, 00:08:19.455 "data_size": 65536 00:08:19.455 } 00:08:19.455 ] 00:08:19.455 }' 00:08:19.455 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.455 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 
00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.716 [2024-11-26 22:52:58.735602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.716 "name": "Existed_Raid", 00:08:19.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.716 "strip_size_kb": 64, 00:08:19.716 "state": "configuring", 00:08:19.716 "raid_level": "raid0", 00:08:19.716 "superblock": false, 00:08:19.716 "num_base_bdevs": 3, 00:08:19.716 "num_base_bdevs_discovered": 2, 00:08:19.716 "num_base_bdevs_operational": 3, 00:08:19.716 "base_bdevs_list": [ 00:08:19.716 { 00:08:19.716 "name": null, 00:08:19.716 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:19.716 "is_configured": false, 00:08:19.716 "data_offset": 0, 00:08:19.716 "data_size": 65536 00:08:19.716 }, 00:08:19.716 { 00:08:19.716 "name": "BaseBdev2", 00:08:19.716 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:19.716 "is_configured": true, 00:08:19.716 "data_offset": 0, 00:08:19.716 "data_size": 65536 00:08:19.716 }, 00:08:19.716 { 00:08:19.716 "name": "BaseBdev3", 00:08:19.716 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:19.716 "is_configured": true, 00:08:19.716 "data_offset": 0, 00:08:19.716 "data_size": 65536 00:08:19.716 } 00:08:19.716 ] 00:08:19.716 }' 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.716 22:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 746cc170-eb3d-4221-888c-5e4066bd8499 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 [2024-11-26 22:52:59.286490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:20.291 [2024-11-26 22:52:59.286584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.291 [2024-11-26 22:52:59.286607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:20.291 [2024-11-26 22:52:59.286868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:20.291 [2024-11-26 22:52:59.287012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:08:20.291 [2024-11-26 22:52:59.287057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:20.291 [2024-11-26 22:52:59.287279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.291 NewBaseBdev 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 [ 00:08:20.291 { 00:08:20.291 "name": "NewBaseBdev", 00:08:20.291 "aliases": [ 00:08:20.291 
"746cc170-eb3d-4221-888c-5e4066bd8499" 00:08:20.291 ], 00:08:20.291 "product_name": "Malloc disk", 00:08:20.291 "block_size": 512, 00:08:20.291 "num_blocks": 65536, 00:08:20.291 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:20.291 "assigned_rate_limits": { 00:08:20.291 "rw_ios_per_sec": 0, 00:08:20.291 "rw_mbytes_per_sec": 0, 00:08:20.291 "r_mbytes_per_sec": 0, 00:08:20.291 "w_mbytes_per_sec": 0 00:08:20.291 }, 00:08:20.291 "claimed": true, 00:08:20.291 "claim_type": "exclusive_write", 00:08:20.291 "zoned": false, 00:08:20.291 "supported_io_types": { 00:08:20.291 "read": true, 00:08:20.291 "write": true, 00:08:20.291 "unmap": true, 00:08:20.291 "flush": true, 00:08:20.291 "reset": true, 00:08:20.291 "nvme_admin": false, 00:08:20.291 "nvme_io": false, 00:08:20.291 "nvme_io_md": false, 00:08:20.291 "write_zeroes": true, 00:08:20.291 "zcopy": true, 00:08:20.291 "get_zone_info": false, 00:08:20.291 "zone_management": false, 00:08:20.291 "zone_append": false, 00:08:20.291 "compare": false, 00:08:20.291 "compare_and_write": false, 00:08:20.291 "abort": true, 00:08:20.291 "seek_hole": false, 00:08:20.291 "seek_data": false, 00:08:20.291 "copy": true, 00:08:20.291 "nvme_iov_md": false 00:08:20.291 }, 00:08:20.291 "memory_domains": [ 00:08:20.291 { 00:08:20.291 "dma_device_id": "system", 00:08:20.291 "dma_device_type": 1 00:08:20.291 }, 00:08:20.291 { 00:08:20.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.291 "dma_device_type": 2 00:08:20.291 } 00:08:20.291 ], 00:08:20.291 "driver_specific": {} 00:08:20.291 } 00:08:20.291 ] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.291 "name": "Existed_Raid", 00:08:20.291 "uuid": "97bef607-29c2-412b-b346-411a9ce73529", 00:08:20.291 "strip_size_kb": 64, 00:08:20.291 "state": "online", 00:08:20.291 "raid_level": "raid0", 00:08:20.291 "superblock": false, 00:08:20.291 "num_base_bdevs": 3, 00:08:20.291 "num_base_bdevs_discovered": 3, 00:08:20.291 "num_base_bdevs_operational": 3, 00:08:20.291 "base_bdevs_list": [ 
00:08:20.291 { 00:08:20.291 "name": "NewBaseBdev", 00:08:20.291 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:20.291 "is_configured": true, 00:08:20.291 "data_offset": 0, 00:08:20.291 "data_size": 65536 00:08:20.291 }, 00:08:20.291 { 00:08:20.291 "name": "BaseBdev2", 00:08:20.291 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:20.291 "is_configured": true, 00:08:20.291 "data_offset": 0, 00:08:20.291 "data_size": 65536 00:08:20.291 }, 00:08:20.291 { 00:08:20.291 "name": "BaseBdev3", 00:08:20.291 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:20.291 "is_configured": true, 00:08:20.291 "data_offset": 0, 00:08:20.291 "data_size": 65536 00:08:20.291 } 00:08:20.291 ] 00:08:20.291 }' 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.291 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.862 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.862 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.862 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.862 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.862 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.863 [2024-11-26 22:52:59.754967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.863 "name": "Existed_Raid", 00:08:20.863 "aliases": [ 00:08:20.863 "97bef607-29c2-412b-b346-411a9ce73529" 00:08:20.863 ], 00:08:20.863 "product_name": "Raid Volume", 00:08:20.863 "block_size": 512, 00:08:20.863 "num_blocks": 196608, 00:08:20.863 "uuid": "97bef607-29c2-412b-b346-411a9ce73529", 00:08:20.863 "assigned_rate_limits": { 00:08:20.863 "rw_ios_per_sec": 0, 00:08:20.863 "rw_mbytes_per_sec": 0, 00:08:20.863 "r_mbytes_per_sec": 0, 00:08:20.863 "w_mbytes_per_sec": 0 00:08:20.863 }, 00:08:20.863 "claimed": false, 00:08:20.863 "zoned": false, 00:08:20.863 "supported_io_types": { 00:08:20.863 "read": true, 00:08:20.863 "write": true, 00:08:20.863 "unmap": true, 00:08:20.863 "flush": true, 00:08:20.863 "reset": true, 00:08:20.863 "nvme_admin": false, 00:08:20.863 "nvme_io": false, 00:08:20.863 "nvme_io_md": false, 00:08:20.863 "write_zeroes": true, 00:08:20.863 "zcopy": false, 00:08:20.863 "get_zone_info": false, 00:08:20.863 "zone_management": false, 00:08:20.863 "zone_append": false, 00:08:20.863 "compare": false, 00:08:20.863 "compare_and_write": false, 00:08:20.863 "abort": false, 00:08:20.863 "seek_hole": false, 00:08:20.863 "seek_data": false, 00:08:20.863 "copy": false, 00:08:20.863 "nvme_iov_md": false 00:08:20.863 }, 00:08:20.863 "memory_domains": [ 00:08:20.863 { 00:08:20.863 "dma_device_id": "system", 00:08:20.863 "dma_device_type": 1 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.863 "dma_device_type": 2 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "dma_device_id": "system", 00:08:20.863 "dma_device_type": 1 00:08:20.863 }, 
00:08:20.863 { 00:08:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.863 "dma_device_type": 2 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "dma_device_id": "system", 00:08:20.863 "dma_device_type": 1 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.863 "dma_device_type": 2 00:08:20.863 } 00:08:20.863 ], 00:08:20.863 "driver_specific": { 00:08:20.863 "raid": { 00:08:20.863 "uuid": "97bef607-29c2-412b-b346-411a9ce73529", 00:08:20.863 "strip_size_kb": 64, 00:08:20.863 "state": "online", 00:08:20.863 "raid_level": "raid0", 00:08:20.863 "superblock": false, 00:08:20.863 "num_base_bdevs": 3, 00:08:20.863 "num_base_bdevs_discovered": 3, 00:08:20.863 "num_base_bdevs_operational": 3, 00:08:20.863 "base_bdevs_list": [ 00:08:20.863 { 00:08:20.863 "name": "NewBaseBdev", 00:08:20.863 "uuid": "746cc170-eb3d-4221-888c-5e4066bd8499", 00:08:20.863 "is_configured": true, 00:08:20.863 "data_offset": 0, 00:08:20.863 "data_size": 65536 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "name": "BaseBdev2", 00:08:20.863 "uuid": "8f5b6b0e-29bd-49ed-acf7-7da0f8c4e07a", 00:08:20.863 "is_configured": true, 00:08:20.863 "data_offset": 0, 00:08:20.863 "data_size": 65536 00:08:20.863 }, 00:08:20.863 { 00:08:20.863 "name": "BaseBdev3", 00:08:20.863 "uuid": "824fe2ec-2ed3-4b8e-9ace-f9f1b768b482", 00:08:20.863 "is_configured": true, 00:08:20.863 "data_offset": 0, 00:08:20.863 "data_size": 65536 00:08:20.863 } 00:08:20.863 ] 00:08:20.863 } 00:08:20.863 } 00:08:20.863 }' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:20.863 BaseBdev2 00:08:20.863 BaseBdev3' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.863 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.124 22:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.124 [2024-11-26 22:53:00.026719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.124 [2024-11-26 22:53:00.026789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.124 [2024-11-26 22:53:00.026877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.124 [2024-11-26 22:53:00.026956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.124 [2024-11-26 22:53:00.026998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:21.124 22:53:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76649 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76649 ']' 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76649 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76649 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.124 killing process with pid 76649 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76649' 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76649 00:08:21.124 [2024-11-26 22:53:00.069696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.124 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76649 00:08:21.124 [2024-11-26 22:53:00.100127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.384 ************************************ 00:08:21.384 END TEST raid_state_function_test 00:08:21.384 ************************************ 00:08:21.384 00:08:21.384 real 0m8.848s 00:08:21.384 user 0m15.109s 00:08:21.384 sys 0m1.787s 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.384 22:53:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:21.384 22:53:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.384 22:53:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.384 22:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.384 ************************************ 00:08:21.384 START TEST raid_state_function_test_sb 00:08:21.384 ************************************ 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.384 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77254 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:21.385 Process raid pid: 77254 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77254' 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77254 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77254 ']' 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.385 22:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.385 [2024-11-26 22:53:00.504245] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:21.385 [2024-11-26 22:53:00.504406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.645 [2024-11-26 22:53:00.647098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:21.645 [2024-11-26 22:53:00.685894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.645 [2024-11-26 22:53:00.711451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.645 [2024-11-26 22:53:00.752841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.645 [2024-11-26 22:53:00.752964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.214 [2024-11-26 22:53:01.333011] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.214 [2024-11-26 22:53:01.333064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.214 [2024-11-26 22:53:01.333076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.214 [2024-11-26 22:53:01.333084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.214 [2024-11-26 22:53:01.333096] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.214 [2024-11-26 22:53:01.333103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.214 22:53:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.214 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.215 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.215 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.215 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.475 "name": "Existed_Raid", 00:08:22.475 "uuid": "dfa63fa3-1bba-41c4-b217-7fb4438772c7", 00:08:22.475 "strip_size_kb": 64, 
00:08:22.475 "state": "configuring", 00:08:22.475 "raid_level": "raid0", 00:08:22.475 "superblock": true, 00:08:22.475 "num_base_bdevs": 3, 00:08:22.475 "num_base_bdevs_discovered": 0, 00:08:22.475 "num_base_bdevs_operational": 3, 00:08:22.475 "base_bdevs_list": [ 00:08:22.475 { 00:08:22.475 "name": "BaseBdev1", 00:08:22.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.475 "is_configured": false, 00:08:22.475 "data_offset": 0, 00:08:22.475 "data_size": 0 00:08:22.475 }, 00:08:22.475 { 00:08:22.475 "name": "BaseBdev2", 00:08:22.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.475 "is_configured": false, 00:08:22.475 "data_offset": 0, 00:08:22.475 "data_size": 0 00:08:22.475 }, 00:08:22.475 { 00:08:22.475 "name": "BaseBdev3", 00:08:22.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.475 "is_configured": false, 00:08:22.475 "data_offset": 0, 00:08:22.475 "data_size": 0 00:08:22.475 } 00:08:22.475 ] 00:08:22.475 }' 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.475 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 [2024-11-26 22:53:01.781024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.735 [2024-11-26 22:53:01.781123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 [2024-11-26 22:53:01.793062] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.735 [2024-11-26 22:53:01.793136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.735 [2024-11-26 22:53:01.793166] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.735 [2024-11-26 22:53:01.793186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.735 [2024-11-26 22:53:01.793205] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.735 [2024-11-26 22:53:01.793223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 [2024-11-26 22:53:01.813714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.735 BaseBdev1 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.735 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.735 [ 00:08:22.735 { 00:08:22.735 "name": "BaseBdev1", 00:08:22.735 "aliases": [ 00:08:22.736 "b2f38b8d-2174-44a2-9636-8d88df1c873c" 00:08:22.736 ], 00:08:22.736 "product_name": "Malloc disk", 00:08:22.736 "block_size": 512, 00:08:22.736 "num_blocks": 65536, 00:08:22.736 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:22.736 "assigned_rate_limits": { 00:08:22.736 "rw_ios_per_sec": 0, 00:08:22.736 "rw_mbytes_per_sec": 0, 00:08:22.736 "r_mbytes_per_sec": 0, 00:08:22.736 "w_mbytes_per_sec": 0 00:08:22.736 }, 00:08:22.736 "claimed": true, 00:08:22.736 "claim_type": "exclusive_write", 00:08:22.736 "zoned": false, 00:08:22.736 "supported_io_types": { 
00:08:22.736 "read": true, 00:08:22.736 "write": true, 00:08:22.736 "unmap": true, 00:08:22.736 "flush": true, 00:08:22.736 "reset": true, 00:08:22.736 "nvme_admin": false, 00:08:22.736 "nvme_io": false, 00:08:22.736 "nvme_io_md": false, 00:08:22.736 "write_zeroes": true, 00:08:22.736 "zcopy": true, 00:08:22.736 "get_zone_info": false, 00:08:22.736 "zone_management": false, 00:08:22.736 "zone_append": false, 00:08:22.736 "compare": false, 00:08:22.736 "compare_and_write": false, 00:08:22.736 "abort": true, 00:08:22.736 "seek_hole": false, 00:08:22.736 "seek_data": false, 00:08:22.736 "copy": true, 00:08:22.736 "nvme_iov_md": false 00:08:22.736 }, 00:08:22.736 "memory_domains": [ 00:08:22.736 { 00:08:22.736 "dma_device_id": "system", 00:08:22.736 "dma_device_type": 1 00:08:22.736 }, 00:08:22.736 { 00:08:22.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.736 "dma_device_type": 2 00:08:22.736 } 00:08:22.736 ], 00:08:22.736 "driver_specific": {} 00:08:22.736 } 00:08:22.736 ] 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.736 22:53:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.736 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.996 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.996 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.996 "name": "Existed_Raid", 00:08:22.996 "uuid": "8c74c269-d75e-424c-8d5c-a775a1cdb840", 00:08:22.996 "strip_size_kb": 64, 00:08:22.996 "state": "configuring", 00:08:22.996 "raid_level": "raid0", 00:08:22.996 "superblock": true, 00:08:22.996 "num_base_bdevs": 3, 00:08:22.996 "num_base_bdevs_discovered": 1, 00:08:22.996 "num_base_bdevs_operational": 3, 00:08:22.996 "base_bdevs_list": [ 00:08:22.996 { 00:08:22.996 "name": "BaseBdev1", 00:08:22.996 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:22.996 "is_configured": true, 00:08:22.996 "data_offset": 2048, 00:08:22.996 "data_size": 63488 00:08:22.996 }, 00:08:22.996 { 00:08:22.996 "name": "BaseBdev2", 00:08:22.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.996 "is_configured": false, 00:08:22.996 "data_offset": 0, 00:08:22.996 "data_size": 0 00:08:22.996 }, 00:08:22.996 { 00:08:22.996 "name": 
"BaseBdev3", 00:08:22.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.996 "is_configured": false, 00:08:22.996 "data_offset": 0, 00:08:22.996 "data_size": 0 00:08:22.996 } 00:08:22.996 ] 00:08:22.996 }' 00:08:22.996 22:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.996 22:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 [2024-11-26 22:53:02.269873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.256 [2024-11-26 22:53:02.269972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 [2024-11-26 22:53:02.281909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.256 [2024-11-26 22:53:02.283782] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.256 [2024-11-26 22:53:02.283869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.256 [2024-11-26 22:53:02.283886] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.256 [2024-11-26 22:53:02.283893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 22:53:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.256 "name": "Existed_Raid", 00:08:23.256 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:23.256 "strip_size_kb": 64, 00:08:23.256 "state": "configuring", 00:08:23.256 "raid_level": "raid0", 00:08:23.256 "superblock": true, 00:08:23.256 "num_base_bdevs": 3, 00:08:23.256 "num_base_bdevs_discovered": 1, 00:08:23.256 "num_base_bdevs_operational": 3, 00:08:23.256 "base_bdevs_list": [ 00:08:23.256 { 00:08:23.256 "name": "BaseBdev1", 00:08:23.256 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:23.256 "is_configured": true, 00:08:23.256 "data_offset": 2048, 00:08:23.256 "data_size": 63488 00:08:23.256 }, 00:08:23.256 { 00:08:23.256 "name": "BaseBdev2", 00:08:23.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.256 "is_configured": false, 00:08:23.256 "data_offset": 0, 00:08:23.256 "data_size": 0 00:08:23.256 }, 00:08:23.256 { 00:08:23.256 "name": "BaseBdev3", 00:08:23.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.256 "is_configured": false, 00:08:23.256 "data_offset": 0, 00:08:23.256 "data_size": 0 00:08:23.256 } 00:08:23.256 ] 00:08:23.256 }' 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.256 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.827 [2024-11-26 22:53:02.712807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.827 BaseBdev2 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.827 [ 00:08:23.827 { 00:08:23.827 "name": "BaseBdev2", 00:08:23.827 "aliases": [ 00:08:23.827 
"1c938ae4-cc6e-4eb9-a3ce-d84eb270a696" 00:08:23.827 ], 00:08:23.827 "product_name": "Malloc disk", 00:08:23.827 "block_size": 512, 00:08:23.827 "num_blocks": 65536, 00:08:23.827 "uuid": "1c938ae4-cc6e-4eb9-a3ce-d84eb270a696", 00:08:23.827 "assigned_rate_limits": { 00:08:23.827 "rw_ios_per_sec": 0, 00:08:23.827 "rw_mbytes_per_sec": 0, 00:08:23.827 "r_mbytes_per_sec": 0, 00:08:23.827 "w_mbytes_per_sec": 0 00:08:23.827 }, 00:08:23.827 "claimed": true, 00:08:23.827 "claim_type": "exclusive_write", 00:08:23.827 "zoned": false, 00:08:23.827 "supported_io_types": { 00:08:23.827 "read": true, 00:08:23.827 "write": true, 00:08:23.827 "unmap": true, 00:08:23.827 "flush": true, 00:08:23.827 "reset": true, 00:08:23.827 "nvme_admin": false, 00:08:23.827 "nvme_io": false, 00:08:23.827 "nvme_io_md": false, 00:08:23.827 "write_zeroes": true, 00:08:23.827 "zcopy": true, 00:08:23.827 "get_zone_info": false, 00:08:23.827 "zone_management": false, 00:08:23.827 "zone_append": false, 00:08:23.827 "compare": false, 00:08:23.827 "compare_and_write": false, 00:08:23.827 "abort": true, 00:08:23.827 "seek_hole": false, 00:08:23.827 "seek_data": false, 00:08:23.827 "copy": true, 00:08:23.827 "nvme_iov_md": false 00:08:23.827 }, 00:08:23.827 "memory_domains": [ 00:08:23.827 { 00:08:23.827 "dma_device_id": "system", 00:08:23.827 "dma_device_type": 1 00:08:23.827 }, 00:08:23.827 { 00:08:23.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.827 "dma_device_type": 2 00:08:23.827 } 00:08:23.827 ], 00:08:23.827 "driver_specific": {} 00:08:23.827 } 00:08:23.827 ] 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.827 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.828 "name": "Existed_Raid", 00:08:23.828 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:23.828 
"strip_size_kb": 64, 00:08:23.828 "state": "configuring", 00:08:23.828 "raid_level": "raid0", 00:08:23.828 "superblock": true, 00:08:23.828 "num_base_bdevs": 3, 00:08:23.828 "num_base_bdevs_discovered": 2, 00:08:23.828 "num_base_bdevs_operational": 3, 00:08:23.828 "base_bdevs_list": [ 00:08:23.828 { 00:08:23.828 "name": "BaseBdev1", 00:08:23.828 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:23.828 "is_configured": true, 00:08:23.828 "data_offset": 2048, 00:08:23.828 "data_size": 63488 00:08:23.828 }, 00:08:23.828 { 00:08:23.828 "name": "BaseBdev2", 00:08:23.828 "uuid": "1c938ae4-cc6e-4eb9-a3ce-d84eb270a696", 00:08:23.828 "is_configured": true, 00:08:23.828 "data_offset": 2048, 00:08:23.828 "data_size": 63488 00:08:23.828 }, 00:08:23.828 { 00:08:23.828 "name": "BaseBdev3", 00:08:23.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.828 "is_configured": false, 00:08:23.828 "data_offset": 0, 00:08:23.828 "data_size": 0 00:08:23.828 } 00:08:23.828 ] 00:08:23.828 }' 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.828 22:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.087 [2024-11-26 22:53:03.124668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.087 [2024-11-26 22:53:03.124850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:24.087 [2024-11-26 22:53:03.124865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.087 BaseBdev3 00:08:24.087 [2024-11-26 22:53:03.125133] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:24.087 [2024-11-26 22:53:03.125284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:24.087 [2024-11-26 22:53:03.125298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:24.087 [2024-11-26 22:53:03.125417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.087 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.088 [ 00:08:24.088 { 00:08:24.088 "name": "BaseBdev3", 00:08:24.088 "aliases": [ 00:08:24.088 "d7285f6d-bd92-4055-93f4-77a8988aa3b1" 00:08:24.088 ], 00:08:24.088 "product_name": "Malloc disk", 00:08:24.088 "block_size": 512, 00:08:24.088 "num_blocks": 65536, 00:08:24.088 "uuid": "d7285f6d-bd92-4055-93f4-77a8988aa3b1", 00:08:24.088 "assigned_rate_limits": { 00:08:24.088 "rw_ios_per_sec": 0, 00:08:24.088 "rw_mbytes_per_sec": 0, 00:08:24.088 "r_mbytes_per_sec": 0, 00:08:24.088 "w_mbytes_per_sec": 0 00:08:24.088 }, 00:08:24.088 "claimed": true, 00:08:24.088 "claim_type": "exclusive_write", 00:08:24.088 "zoned": false, 00:08:24.088 "supported_io_types": { 00:08:24.088 "read": true, 00:08:24.088 "write": true, 00:08:24.088 "unmap": true, 00:08:24.088 "flush": true, 00:08:24.088 "reset": true, 00:08:24.088 "nvme_admin": false, 00:08:24.088 "nvme_io": false, 00:08:24.088 "nvme_io_md": false, 00:08:24.088 "write_zeroes": true, 00:08:24.088 "zcopy": true, 00:08:24.088 "get_zone_info": false, 00:08:24.088 "zone_management": false, 00:08:24.088 "zone_append": false, 00:08:24.088 "compare": false, 00:08:24.088 "compare_and_write": false, 00:08:24.088 "abort": true, 00:08:24.088 "seek_hole": false, 00:08:24.088 "seek_data": false, 00:08:24.088 "copy": true, 00:08:24.088 "nvme_iov_md": false 00:08:24.088 }, 00:08:24.088 "memory_domains": [ 00:08:24.088 { 00:08:24.088 "dma_device_id": "system", 00:08:24.088 "dma_device_type": 1 00:08:24.088 }, 00:08:24.088 { 00:08:24.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.088 "dma_device_type": 2 00:08:24.088 } 00:08:24.088 ], 00:08:24.088 "driver_specific": {} 00:08:24.088 } 00:08:24.088 ] 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.088 
22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.088 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.348 22:53:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.348 "name": "Existed_Raid", 00:08:24.348 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:24.348 "strip_size_kb": 64, 00:08:24.348 "state": "online", 00:08:24.348 "raid_level": "raid0", 00:08:24.348 "superblock": true, 00:08:24.348 "num_base_bdevs": 3, 00:08:24.348 "num_base_bdevs_discovered": 3, 00:08:24.348 "num_base_bdevs_operational": 3, 00:08:24.348 "base_bdevs_list": [ 00:08:24.348 { 00:08:24.348 "name": "BaseBdev1", 00:08:24.348 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:24.348 "is_configured": true, 00:08:24.348 "data_offset": 2048, 00:08:24.348 "data_size": 63488 00:08:24.348 }, 00:08:24.348 { 00:08:24.348 "name": "BaseBdev2", 00:08:24.348 "uuid": "1c938ae4-cc6e-4eb9-a3ce-d84eb270a696", 00:08:24.348 "is_configured": true, 00:08:24.348 "data_offset": 2048, 00:08:24.348 "data_size": 63488 00:08:24.348 }, 00:08:24.348 { 00:08:24.348 "name": "BaseBdev3", 00:08:24.348 "uuid": "d7285f6d-bd92-4055-93f4-77a8988aa3b1", 00:08:24.349 "is_configured": true, 00:08:24.349 "data_offset": 2048, 00:08:24.349 "data_size": 63488 00:08:24.349 } 00:08:24.349 ] 00:08:24.349 }' 00:08:24.349 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.349 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.608 
22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.608 [2024-11-26 22:53:03.561110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.608 "name": "Existed_Raid", 00:08:24.608 "aliases": [ 00:08:24.608 "300a067d-9afa-4a93-80b5-ad4e2faa292b" 00:08:24.608 ], 00:08:24.608 "product_name": "Raid Volume", 00:08:24.608 "block_size": 512, 00:08:24.608 "num_blocks": 190464, 00:08:24.608 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:24.608 "assigned_rate_limits": { 00:08:24.608 "rw_ios_per_sec": 0, 00:08:24.608 "rw_mbytes_per_sec": 0, 00:08:24.608 "r_mbytes_per_sec": 0, 00:08:24.608 "w_mbytes_per_sec": 0 00:08:24.608 }, 00:08:24.608 "claimed": false, 00:08:24.608 "zoned": false, 00:08:24.608 "supported_io_types": { 00:08:24.608 "read": true, 00:08:24.608 "write": true, 00:08:24.608 "unmap": true, 00:08:24.608 "flush": true, 00:08:24.608 "reset": true, 00:08:24.608 "nvme_admin": false, 00:08:24.608 "nvme_io": false, 00:08:24.608 "nvme_io_md": false, 00:08:24.608 "write_zeroes": true, 00:08:24.608 "zcopy": false, 00:08:24.608 "get_zone_info": false, 00:08:24.608 "zone_management": false, 00:08:24.608 "zone_append": false, 00:08:24.608 "compare": false, 00:08:24.608 "compare_and_write": false, 00:08:24.608 "abort": 
false, 00:08:24.608 "seek_hole": false, 00:08:24.608 "seek_data": false, 00:08:24.608 "copy": false, 00:08:24.608 "nvme_iov_md": false 00:08:24.608 }, 00:08:24.608 "memory_domains": [ 00:08:24.608 { 00:08:24.608 "dma_device_id": "system", 00:08:24.608 "dma_device_type": 1 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.608 "dma_device_type": 2 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "dma_device_id": "system", 00:08:24.608 "dma_device_type": 1 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.608 "dma_device_type": 2 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "dma_device_id": "system", 00:08:24.608 "dma_device_type": 1 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.608 "dma_device_type": 2 00:08:24.608 } 00:08:24.608 ], 00:08:24.608 "driver_specific": { 00:08:24.608 "raid": { 00:08:24.608 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:24.608 "strip_size_kb": 64, 00:08:24.608 "state": "online", 00:08:24.608 "raid_level": "raid0", 00:08:24.608 "superblock": true, 00:08:24.608 "num_base_bdevs": 3, 00:08:24.608 "num_base_bdevs_discovered": 3, 00:08:24.608 "num_base_bdevs_operational": 3, 00:08:24.608 "base_bdevs_list": [ 00:08:24.608 { 00:08:24.608 "name": "BaseBdev1", 00:08:24.608 "uuid": "b2f38b8d-2174-44a2-9636-8d88df1c873c", 00:08:24.608 "is_configured": true, 00:08:24.608 "data_offset": 2048, 00:08:24.608 "data_size": 63488 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "name": "BaseBdev2", 00:08:24.608 "uuid": "1c938ae4-cc6e-4eb9-a3ce-d84eb270a696", 00:08:24.608 "is_configured": true, 00:08:24.608 "data_offset": 2048, 00:08:24.608 "data_size": 63488 00:08:24.608 }, 00:08:24.608 { 00:08:24.608 "name": "BaseBdev3", 00:08:24.608 "uuid": "d7285f6d-bd92-4055-93f4-77a8988aa3b1", 00:08:24.608 "is_configured": true, 00:08:24.608 "data_offset": 2048, 00:08:24.608 "data_size": 63488 00:08:24.608 } 00:08:24.608 ] 00:08:24.608 } 
00:08:24.608 } 00:08:24.608 }' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.608 BaseBdev2 00:08:24.608 BaseBdev3' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.608 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.609 22:53:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.868 [2024-11-26 22:53:03.808927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:08:24.868 [2024-11-26 22:53:03.808998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.868 [2024-11-26 22:53:03.809074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.868 
22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.868 "name": "Existed_Raid", 00:08:24.868 "uuid": "300a067d-9afa-4a93-80b5-ad4e2faa292b", 00:08:24.868 "strip_size_kb": 64, 00:08:24.868 "state": "offline", 00:08:24.868 "raid_level": "raid0", 00:08:24.868 "superblock": true, 00:08:24.868 "num_base_bdevs": 3, 00:08:24.868 "num_base_bdevs_discovered": 2, 00:08:24.868 "num_base_bdevs_operational": 2, 00:08:24.868 "base_bdevs_list": [ 00:08:24.868 { 00:08:24.868 "name": null, 00:08:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.868 "is_configured": false, 00:08:24.868 "data_offset": 0, 00:08:24.868 "data_size": 63488 00:08:24.868 }, 00:08:24.868 { 00:08:24.868 "name": "BaseBdev2", 00:08:24.868 "uuid": "1c938ae4-cc6e-4eb9-a3ce-d84eb270a696", 00:08:24.868 "is_configured": true, 00:08:24.868 "data_offset": 2048, 00:08:24.868 "data_size": 63488 00:08:24.868 }, 00:08:24.868 { 00:08:24.868 "name": "BaseBdev3", 00:08:24.868 "uuid": "d7285f6d-bd92-4055-93f4-77a8988aa3b1", 00:08:24.868 "is_configured": true, 00:08:24.868 "data_offset": 2048, 00:08:24.868 "data_size": 63488 00:08:24.868 } 00:08:24.868 ] 00:08:24.868 }' 00:08:24.868 22:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.868 
22:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 [2024-11-26 22:53:04.320219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 [2024-11-26 22:53:04.387121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.439 [2024-11-26 22:53:04.387220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.439 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.439 [ 00:08:25.439 { 00:08:25.439 "name": "BaseBdev2", 00:08:25.439 "aliases": [ 00:08:25.439 "e6fb8a15-2ad0-4b07-a975-1979208aadbf" 00:08:25.439 ], 00:08:25.439 "product_name": "Malloc disk", 00:08:25.439 "block_size": 512, 00:08:25.439 "num_blocks": 65536, 00:08:25.439 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:25.439 "assigned_rate_limits": { 00:08:25.439 "rw_ios_per_sec": 0, 00:08:25.439 "rw_mbytes_per_sec": 0, 00:08:25.439 "r_mbytes_per_sec": 0, 00:08:25.439 "w_mbytes_per_sec": 0 00:08:25.439 }, 00:08:25.439 "claimed": false, 00:08:25.439 "zoned": false, 00:08:25.439 "supported_io_types": { 00:08:25.439 "read": true, 00:08:25.439 "write": true, 00:08:25.439 "unmap": true, 00:08:25.439 "flush": true, 00:08:25.439 "reset": true, 00:08:25.439 "nvme_admin": false, 00:08:25.439 "nvme_io": false, 00:08:25.439 "nvme_io_md": false, 00:08:25.439 "write_zeroes": true, 00:08:25.439 "zcopy": true, 00:08:25.439 "get_zone_info": false, 00:08:25.439 "zone_management": false, 00:08:25.439 "zone_append": false, 00:08:25.439 "compare": false, 00:08:25.439 "compare_and_write": false, 00:08:25.439 "abort": true, 00:08:25.439 "seek_hole": false, 00:08:25.439 "seek_data": false, 00:08:25.439 "copy": true, 00:08:25.440 
"nvme_iov_md": false 00:08:25.440 }, 00:08:25.440 "memory_domains": [ 00:08:25.440 { 00:08:25.440 "dma_device_id": "system", 00:08:25.440 "dma_device_type": 1 00:08:25.440 }, 00:08:25.440 { 00:08:25.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.440 "dma_device_type": 2 00:08:25.440 } 00:08:25.440 ], 00:08:25.440 "driver_specific": {} 00:08:25.440 } 00:08:25.440 ] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.440 BaseBdev3 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.440 
22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.440 [ 00:08:25.440 { 00:08:25.440 "name": "BaseBdev3", 00:08:25.440 "aliases": [ 00:08:25.440 "3a681564-21d9-4f7b-ab83-67a8afb138d0" 00:08:25.440 ], 00:08:25.440 "product_name": "Malloc disk", 00:08:25.440 "block_size": 512, 00:08:25.440 "num_blocks": 65536, 00:08:25.440 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:25.440 "assigned_rate_limits": { 00:08:25.440 "rw_ios_per_sec": 0, 00:08:25.440 "rw_mbytes_per_sec": 0, 00:08:25.440 "r_mbytes_per_sec": 0, 00:08:25.440 "w_mbytes_per_sec": 0 00:08:25.440 }, 00:08:25.440 "claimed": false, 00:08:25.440 "zoned": false, 00:08:25.440 "supported_io_types": { 00:08:25.440 "read": true, 00:08:25.440 "write": true, 00:08:25.440 "unmap": true, 00:08:25.440 "flush": true, 00:08:25.440 "reset": true, 00:08:25.440 "nvme_admin": false, 00:08:25.440 "nvme_io": false, 00:08:25.440 "nvme_io_md": false, 00:08:25.440 "write_zeroes": true, 00:08:25.440 "zcopy": true, 00:08:25.440 "get_zone_info": false, 00:08:25.440 "zone_management": false, 00:08:25.440 "zone_append": false, 00:08:25.440 "compare": false, 00:08:25.440 "compare_and_write": false, 00:08:25.440 "abort": true, 00:08:25.440 "seek_hole": false, 00:08:25.440 "seek_data": 
false, 00:08:25.440 "copy": true, 00:08:25.440 "nvme_iov_md": false 00:08:25.440 }, 00:08:25.440 "memory_domains": [ 00:08:25.440 { 00:08:25.440 "dma_device_id": "system", 00:08:25.440 "dma_device_type": 1 00:08:25.440 }, 00:08:25.440 { 00:08:25.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.440 "dma_device_type": 2 00:08:25.440 } 00:08:25.440 ], 00:08:25.440 "driver_specific": {} 00:08:25.440 } 00:08:25.440 ] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.440 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.700 [2024-11-26 22:53:04.565935] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.700 [2024-11-26 22:53:04.566023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.700 [2024-11-26 22:53:04.566064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.700 [2024-11-26 22:53:04.567968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.700 "name": "Existed_Raid", 00:08:25.700 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:25.700 "strip_size_kb": 64, 00:08:25.700 "state": "configuring", 00:08:25.700 "raid_level": "raid0", 00:08:25.700 
"superblock": true, 00:08:25.700 "num_base_bdevs": 3, 00:08:25.700 "num_base_bdevs_discovered": 2, 00:08:25.700 "num_base_bdevs_operational": 3, 00:08:25.700 "base_bdevs_list": [ 00:08:25.700 { 00:08:25.700 "name": "BaseBdev1", 00:08:25.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.700 "is_configured": false, 00:08:25.700 "data_offset": 0, 00:08:25.700 "data_size": 0 00:08:25.700 }, 00:08:25.700 { 00:08:25.700 "name": "BaseBdev2", 00:08:25.700 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:25.700 "is_configured": true, 00:08:25.700 "data_offset": 2048, 00:08:25.700 "data_size": 63488 00:08:25.700 }, 00:08:25.700 { 00:08:25.700 "name": "BaseBdev3", 00:08:25.700 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:25.700 "is_configured": true, 00:08:25.700 "data_offset": 2048, 00:08:25.700 "data_size": 63488 00:08:25.700 } 00:08:25.700 ] 00:08:25.700 }' 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.700 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.960 [2024-11-26 22:53:04.962033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.960 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.960 "name": "Existed_Raid", 00:08:25.960 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:25.960 "strip_size_kb": 64, 00:08:25.960 "state": "configuring", 00:08:25.961 "raid_level": "raid0", 00:08:25.961 "superblock": true, 00:08:25.961 "num_base_bdevs": 3, 00:08:25.961 "num_base_bdevs_discovered": 1, 00:08:25.961 "num_base_bdevs_operational": 3, 00:08:25.961 "base_bdevs_list": [ 00:08:25.961 { 00:08:25.961 "name": "BaseBdev1", 
00:08:25.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.961 "is_configured": false, 00:08:25.961 "data_offset": 0, 00:08:25.961 "data_size": 0 00:08:25.961 }, 00:08:25.961 { 00:08:25.961 "name": null, 00:08:25.961 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:25.961 "is_configured": false, 00:08:25.961 "data_offset": 0, 00:08:25.961 "data_size": 63488 00:08:25.961 }, 00:08:25.961 { 00:08:25.961 "name": "BaseBdev3", 00:08:25.961 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:25.961 "is_configured": true, 00:08:25.961 "data_offset": 2048, 00:08:25.961 "data_size": 63488 00:08:25.961 } 00:08:25.961 ] 00:08:25.961 }' 00:08:25.961 22:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.961 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.220 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.220 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.220 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.220 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.220 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.481 [2024-11-26 22:53:05.380938] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.481 BaseBdev1 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.481 [ 00:08:26.481 { 00:08:26.481 "name": "BaseBdev1", 00:08:26.481 "aliases": [ 00:08:26.481 "313323d5-fbb2-4309-9baf-0fd231c6bf2f" 00:08:26.481 ], 00:08:26.481 "product_name": "Malloc disk", 00:08:26.481 "block_size": 512, 00:08:26.481 "num_blocks": 65536, 00:08:26.481 
"uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:26.481 "assigned_rate_limits": { 00:08:26.481 "rw_ios_per_sec": 0, 00:08:26.481 "rw_mbytes_per_sec": 0, 00:08:26.481 "r_mbytes_per_sec": 0, 00:08:26.481 "w_mbytes_per_sec": 0 00:08:26.481 }, 00:08:26.481 "claimed": true, 00:08:26.481 "claim_type": "exclusive_write", 00:08:26.481 "zoned": false, 00:08:26.481 "supported_io_types": { 00:08:26.481 "read": true, 00:08:26.481 "write": true, 00:08:26.481 "unmap": true, 00:08:26.481 "flush": true, 00:08:26.481 "reset": true, 00:08:26.481 "nvme_admin": false, 00:08:26.481 "nvme_io": false, 00:08:26.481 "nvme_io_md": false, 00:08:26.481 "write_zeroes": true, 00:08:26.481 "zcopy": true, 00:08:26.481 "get_zone_info": false, 00:08:26.481 "zone_management": false, 00:08:26.481 "zone_append": false, 00:08:26.481 "compare": false, 00:08:26.481 "compare_and_write": false, 00:08:26.481 "abort": true, 00:08:26.481 "seek_hole": false, 00:08:26.481 "seek_data": false, 00:08:26.481 "copy": true, 00:08:26.481 "nvme_iov_md": false 00:08:26.481 }, 00:08:26.481 "memory_domains": [ 00:08:26.481 { 00:08:26.481 "dma_device_id": "system", 00:08:26.481 "dma_device_type": 1 00:08:26.481 }, 00:08:26.481 { 00:08:26.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.481 "dma_device_type": 2 00:08:26.481 } 00:08:26.481 ], 00:08:26.481 "driver_specific": {} 00:08:26.481 } 00:08:26.481 ] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.481 "name": "Existed_Raid", 00:08:26.481 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:26.481 "strip_size_kb": 64, 00:08:26.481 "state": "configuring", 00:08:26.481 "raid_level": "raid0", 00:08:26.481 "superblock": true, 00:08:26.481 "num_base_bdevs": 3, 00:08:26.481 "num_base_bdevs_discovered": 2, 00:08:26.481 "num_base_bdevs_operational": 3, 00:08:26.481 "base_bdevs_list": [ 00:08:26.481 { 00:08:26.481 "name": "BaseBdev1", 00:08:26.481 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 
00:08:26.481 "is_configured": true, 00:08:26.481 "data_offset": 2048, 00:08:26.481 "data_size": 63488 00:08:26.481 }, 00:08:26.481 { 00:08:26.481 "name": null, 00:08:26.481 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:26.481 "is_configured": false, 00:08:26.481 "data_offset": 0, 00:08:26.481 "data_size": 63488 00:08:26.481 }, 00:08:26.481 { 00:08:26.481 "name": "BaseBdev3", 00:08:26.481 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:26.481 "is_configured": true, 00:08:26.481 "data_offset": 2048, 00:08:26.481 "data_size": 63488 00:08:26.481 } 00:08:26.481 ] 00:08:26.481 }' 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.481 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.742 [2024-11-26 22:53:05.829096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.742 22:53:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.742 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.003 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.003 "name": 
"Existed_Raid", 00:08:27.003 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:27.003 "strip_size_kb": 64, 00:08:27.003 "state": "configuring", 00:08:27.003 "raid_level": "raid0", 00:08:27.003 "superblock": true, 00:08:27.003 "num_base_bdevs": 3, 00:08:27.003 "num_base_bdevs_discovered": 1, 00:08:27.003 "num_base_bdevs_operational": 3, 00:08:27.003 "base_bdevs_list": [ 00:08:27.003 { 00:08:27.003 "name": "BaseBdev1", 00:08:27.003 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:27.003 "is_configured": true, 00:08:27.003 "data_offset": 2048, 00:08:27.003 "data_size": 63488 00:08:27.003 }, 00:08:27.003 { 00:08:27.003 "name": null, 00:08:27.003 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:27.003 "is_configured": false, 00:08:27.003 "data_offset": 0, 00:08:27.003 "data_size": 63488 00:08:27.003 }, 00:08:27.003 { 00:08:27.003 "name": null, 00:08:27.003 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:27.003 "is_configured": false, 00:08:27.003 "data_offset": 0, 00:08:27.003 "data_size": 63488 00:08:27.003 } 00:08:27.003 ] 00:08:27.003 }' 00:08:27.003 22:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.003 22:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:27.264 
22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.264 [2024-11-26 22:53:06.369311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.264 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.525 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.525 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.525 "name": "Existed_Raid", 00:08:27.525 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:27.525 "strip_size_kb": 64, 00:08:27.525 "state": "configuring", 00:08:27.525 "raid_level": "raid0", 00:08:27.525 "superblock": true, 00:08:27.525 "num_base_bdevs": 3, 00:08:27.525 "num_base_bdevs_discovered": 2, 00:08:27.525 "num_base_bdevs_operational": 3, 00:08:27.525 "base_bdevs_list": [ 00:08:27.525 { 00:08:27.525 "name": "BaseBdev1", 00:08:27.525 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:27.525 "is_configured": true, 00:08:27.525 "data_offset": 2048, 00:08:27.525 "data_size": 63488 00:08:27.525 }, 00:08:27.525 { 00:08:27.525 "name": null, 00:08:27.525 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:27.525 "is_configured": false, 00:08:27.525 "data_offset": 0, 00:08:27.525 "data_size": 63488 00:08:27.525 }, 00:08:27.525 { 00:08:27.525 "name": "BaseBdev3", 00:08:27.525 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:27.525 "is_configured": true, 00:08:27.525 "data_offset": 2048, 00:08:27.525 "data_size": 63488 00:08:27.525 } 00:08:27.525 ] 00:08:27.525 }' 00:08:27.525 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.525 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.785 [2024-11-26 22:53:06.849451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.785 "name": "Existed_Raid", 00:08:27.785 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:27.785 "strip_size_kb": 64, 00:08:27.785 "state": "configuring", 00:08:27.785 "raid_level": "raid0", 00:08:27.785 "superblock": true, 00:08:27.785 "num_base_bdevs": 3, 00:08:27.785 "num_base_bdevs_discovered": 1, 00:08:27.785 "num_base_bdevs_operational": 3, 00:08:27.785 "base_bdevs_list": [ 00:08:27.785 { 00:08:27.785 "name": null, 00:08:27.785 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:27.785 "is_configured": false, 00:08:27.785 "data_offset": 0, 00:08:27.785 "data_size": 63488 00:08:27.785 }, 00:08:27.785 { 00:08:27.785 "name": null, 00:08:27.785 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:27.785 "is_configured": false, 00:08:27.785 "data_offset": 0, 00:08:27.785 "data_size": 63488 00:08:27.785 }, 00:08:27.785 { 00:08:27.785 "name": "BaseBdev3", 00:08:27.785 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:27.785 "is_configured": true, 00:08:27.785 "data_offset": 2048, 00:08:27.785 "data_size": 63488 00:08:27.785 } 
00:08:27.785 ] 00:08:27.785 }' 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.785 22:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.356 [2024-11-26 22:53:07.315992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.356 "name": "Existed_Raid", 00:08:28.356 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:28.356 "strip_size_kb": 64, 00:08:28.356 "state": "configuring", 00:08:28.356 "raid_level": "raid0", 00:08:28.356 "superblock": true, 00:08:28.356 "num_base_bdevs": 3, 00:08:28.356 "num_base_bdevs_discovered": 2, 00:08:28.356 "num_base_bdevs_operational": 3, 00:08:28.356 "base_bdevs_list": [ 00:08:28.356 { 00:08:28.356 "name": null, 00:08:28.356 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:28.356 "is_configured": false, 00:08:28.356 "data_offset": 0, 
00:08:28.356 "data_size": 63488 00:08:28.356 }, 00:08:28.356 { 00:08:28.356 "name": "BaseBdev2", 00:08:28.356 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:28.356 "is_configured": true, 00:08:28.356 "data_offset": 2048, 00:08:28.356 "data_size": 63488 00:08:28.356 }, 00:08:28.356 { 00:08:28.356 "name": "BaseBdev3", 00:08:28.356 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:28.356 "is_configured": true, 00:08:28.356 "data_offset": 2048, 00:08:28.356 "data_size": 63488 00:08:28.356 } 00:08:28.356 ] 00:08:28.356 }' 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.356 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 313323d5-fbb2-4309-9baf-0fd231c6bf2f 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 [2024-11-26 22:53:07.850915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:28.926 [2024-11-26 22:53:07.851081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.926 [2024-11-26 22:53:07.851093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.926 [2024-11-26 22:53:07.851359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:28.926 [2024-11-26 22:53:07.851485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.926 [2024-11-26 22:53:07.851499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:28.926 NewBaseBdev 00:08:28.926 [2024-11-26 22:53:07.851594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.926 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 [ 00:08:28.926 { 00:08:28.926 "name": "NewBaseBdev", 00:08:28.926 "aliases": [ 00:08:28.926 "313323d5-fbb2-4309-9baf-0fd231c6bf2f" 00:08:28.926 ], 00:08:28.926 "product_name": "Malloc disk", 00:08:28.926 "block_size": 512, 00:08:28.926 "num_blocks": 65536, 00:08:28.926 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:28.926 "assigned_rate_limits": { 00:08:28.926 "rw_ios_per_sec": 0, 00:08:28.926 "rw_mbytes_per_sec": 0, 00:08:28.926 "r_mbytes_per_sec": 0, 00:08:28.926 "w_mbytes_per_sec": 0 00:08:28.926 }, 00:08:28.926 "claimed": true, 00:08:28.926 "claim_type": "exclusive_write", 00:08:28.926 "zoned": false, 00:08:28.926 "supported_io_types": { 00:08:28.926 "read": true, 00:08:28.926 "write": true, 00:08:28.926 "unmap": true, 00:08:28.926 "flush": true, 00:08:28.926 "reset": true, 00:08:28.926 "nvme_admin": false, 00:08:28.926 "nvme_io": false, 00:08:28.926 "nvme_io_md": false, 00:08:28.926 "write_zeroes": true, 00:08:28.926 "zcopy": true, 00:08:28.926 "get_zone_info": false, 
00:08:28.926 "zone_management": false, 00:08:28.926 "zone_append": false, 00:08:28.926 "compare": false, 00:08:28.926 "compare_and_write": false, 00:08:28.926 "abort": true, 00:08:28.926 "seek_hole": false, 00:08:28.926 "seek_data": false, 00:08:28.926 "copy": true, 00:08:28.927 "nvme_iov_md": false 00:08:28.927 }, 00:08:28.927 "memory_domains": [ 00:08:28.927 { 00:08:28.927 "dma_device_id": "system", 00:08:28.927 "dma_device_type": 1 00:08:28.927 }, 00:08:28.927 { 00:08:28.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.927 "dma_device_type": 2 00:08:28.927 } 00:08:28.927 ], 00:08:28.927 "driver_specific": {} 00:08:28.927 } 00:08:28.927 ] 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.927 22:53:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.927 "name": "Existed_Raid", 00:08:28.927 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:28.927 "strip_size_kb": 64, 00:08:28.927 "state": "online", 00:08:28.927 "raid_level": "raid0", 00:08:28.927 "superblock": true, 00:08:28.927 "num_base_bdevs": 3, 00:08:28.927 "num_base_bdevs_discovered": 3, 00:08:28.927 "num_base_bdevs_operational": 3, 00:08:28.927 "base_bdevs_list": [ 00:08:28.927 { 00:08:28.927 "name": "NewBaseBdev", 00:08:28.927 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:28.927 "is_configured": true, 00:08:28.927 "data_offset": 2048, 00:08:28.927 "data_size": 63488 00:08:28.927 }, 00:08:28.927 { 00:08:28.927 "name": "BaseBdev2", 00:08:28.927 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:28.927 "is_configured": true, 00:08:28.927 "data_offset": 2048, 00:08:28.927 "data_size": 63488 00:08:28.927 }, 00:08:28.927 { 00:08:28.927 "name": "BaseBdev3", 00:08:28.927 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:28.927 "is_configured": true, 00:08:28.927 "data_offset": 2048, 00:08:28.927 "data_size": 63488 00:08:28.927 } 00:08:28.927 ] 00:08:28.927 }' 00:08:28.927 22:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.927 
22:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 [2024-11-26 22:53:08.323392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.498 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.498 "name": "Existed_Raid", 00:08:29.498 "aliases": [ 00:08:29.498 "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8" 00:08:29.498 ], 00:08:29.498 "product_name": "Raid Volume", 00:08:29.498 "block_size": 512, 00:08:29.498 "num_blocks": 190464, 00:08:29.498 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:29.498 "assigned_rate_limits": { 00:08:29.498 "rw_ios_per_sec": 0, 00:08:29.498 "rw_mbytes_per_sec": 0, 
00:08:29.498 "r_mbytes_per_sec": 0, 00:08:29.498 "w_mbytes_per_sec": 0 00:08:29.498 }, 00:08:29.498 "claimed": false, 00:08:29.498 "zoned": false, 00:08:29.498 "supported_io_types": { 00:08:29.498 "read": true, 00:08:29.498 "write": true, 00:08:29.498 "unmap": true, 00:08:29.498 "flush": true, 00:08:29.498 "reset": true, 00:08:29.498 "nvme_admin": false, 00:08:29.498 "nvme_io": false, 00:08:29.498 "nvme_io_md": false, 00:08:29.498 "write_zeroes": true, 00:08:29.498 "zcopy": false, 00:08:29.499 "get_zone_info": false, 00:08:29.499 "zone_management": false, 00:08:29.499 "zone_append": false, 00:08:29.499 "compare": false, 00:08:29.499 "compare_and_write": false, 00:08:29.499 "abort": false, 00:08:29.499 "seek_hole": false, 00:08:29.499 "seek_data": false, 00:08:29.499 "copy": false, 00:08:29.499 "nvme_iov_md": false 00:08:29.499 }, 00:08:29.499 "memory_domains": [ 00:08:29.499 { 00:08:29.499 "dma_device_id": "system", 00:08:29.499 "dma_device_type": 1 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.499 "dma_device_type": 2 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "dma_device_id": "system", 00:08:29.499 "dma_device_type": 1 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.499 "dma_device_type": 2 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "dma_device_id": "system", 00:08:29.499 "dma_device_type": 1 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.499 "dma_device_type": 2 00:08:29.499 } 00:08:29.499 ], 00:08:29.499 "driver_specific": { 00:08:29.499 "raid": { 00:08:29.499 "uuid": "974a258b-8067-4f17-8f0f-eb5ff3c1d1e8", 00:08:29.499 "strip_size_kb": 64, 00:08:29.499 "state": "online", 00:08:29.499 "raid_level": "raid0", 00:08:29.499 "superblock": true, 00:08:29.499 "num_base_bdevs": 3, 00:08:29.499 "num_base_bdevs_discovered": 3, 00:08:29.499 "num_base_bdevs_operational": 3, 00:08:29.499 "base_bdevs_list": [ 00:08:29.499 { 
00:08:29.499 "name": "NewBaseBdev", 00:08:29.499 "uuid": "313323d5-fbb2-4309-9baf-0fd231c6bf2f", 00:08:29.499 "is_configured": true, 00:08:29.499 "data_offset": 2048, 00:08:29.499 "data_size": 63488 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "name": "BaseBdev2", 00:08:29.499 "uuid": "e6fb8a15-2ad0-4b07-a975-1979208aadbf", 00:08:29.499 "is_configured": true, 00:08:29.499 "data_offset": 2048, 00:08:29.499 "data_size": 63488 00:08:29.499 }, 00:08:29.499 { 00:08:29.499 "name": "BaseBdev3", 00:08:29.499 "uuid": "3a681564-21d9-4f7b-ab83-67a8afb138d0", 00:08:29.499 "is_configured": true, 00:08:29.499 "data_offset": 2048, 00:08:29.499 "data_size": 63488 00:08:29.499 } 00:08:29.499 ] 00:08:29.499 } 00:08:29.499 } 00:08:29.499 }' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:29.499 BaseBdev2 00:08:29.499 BaseBdev3' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 22:53:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.499 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 [2024-11-26 22:53:08.619163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.499 [2024-11-26 22:53:08.619192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.499 [2024-11-26 22:53:08.619299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.499 [2024-11-26 22:53:08.619357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.499 [2024-11-26 22:53:08.619375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77254 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77254 ']' 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77254 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.760 22:53:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77254 00:08:29.760 killing process with pid 77254 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77254' 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77254 00:08:29.760 [2024-11-26 22:53:08.657442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.760 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77254 00:08:29.760 [2024-11-26 22:53:08.687978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.020 22:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.020 00:08:30.021 real 0m8.513s 00:08:30.021 user 0m14.425s 00:08:30.021 sys 0m1.789s 00:08:30.021 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.021 22:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.021 ************************************ 00:08:30.021 END TEST raid_state_function_test_sb 00:08:30.021 ************************************ 00:08:30.021 22:53:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:30.021 22:53:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:30.021 22:53:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.021 22:53:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.021 ************************************ 00:08:30.021 START TEST raid_superblock_test 00:08:30.021 
************************************ 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77852 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77852 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77852 ']' 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.021 22:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.021 [2024-11-26 22:53:09.086988] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:30.021 [2024-11-26 22:53:09.087137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77852 ] 00:08:30.280 [2024-11-26 22:53:09.227793] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:30.280 [2024-11-26 22:53:09.264085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.280 [2024-11-26 22:53:09.289020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.280 [2024-11-26 22:53:09.331022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.280 [2024-11-26 22:53:09.331058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.848 malloc1 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.848 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.849 [2024-11-26 22:53:09.939703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.849 [2024-11-26 22:53:09.939802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.849 [2024-11-26 22:53:09.939842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.849 [2024-11-26 22:53:09.939882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.849 [2024-11-26 22:53:09.941927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.849 [2024-11-26 22:53:09.941993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.849 pt1 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.849 malloc2 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.849 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.849 [2024-11-26 22:53:09.972423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:30.849 [2024-11-26 22:53:09.972525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.849 [2024-11-26 22:53:09.972562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:30.849 [2024-11-26 22:53:09.972590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.109 [2024-11-26 22:53:09.974646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.109 [2024-11-26 22:53:09.974681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.109 pt2 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.109 malloc3 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.109 22:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.109 [2024-11-26 22:53:10.001214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:31.109 [2024-11-26 22:53:10.001324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.109 [2024-11-26 22:53:10.001363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:31.109 [2024-11-26 22:53:10.001393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:31.109 [2024-11-26 22:53:10.003511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.109 [2024-11-26 22:53:10.003579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:31.109 pt3 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.109 [2024-11-26 22:53:10.013278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:31.109 [2024-11-26 22:53:10.015117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.109 [2024-11-26 22:53:10.015216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:31.109 [2024-11-26 22:53:10.015399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:31.109 [2024-11-26 22:53:10.015448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.109 [2024-11-26 22:53:10.015729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.109 [2024-11-26 22:53:10.015902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:31.109 [2024-11-26 22:53:10.015942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:31.109 [2024-11-26 
22:53:10.016110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.109 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.110 "name": "raid_bdev1", 00:08:31.110 "uuid": 
"4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:31.110 "strip_size_kb": 64, 00:08:31.110 "state": "online", 00:08:31.110 "raid_level": "raid0", 00:08:31.110 "superblock": true, 00:08:31.110 "num_base_bdevs": 3, 00:08:31.110 "num_base_bdevs_discovered": 3, 00:08:31.110 "num_base_bdevs_operational": 3, 00:08:31.110 "base_bdevs_list": [ 00:08:31.110 { 00:08:31.110 "name": "pt1", 00:08:31.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.110 "is_configured": true, 00:08:31.110 "data_offset": 2048, 00:08:31.110 "data_size": 63488 00:08:31.110 }, 00:08:31.110 { 00:08:31.110 "name": "pt2", 00:08:31.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.110 "is_configured": true, 00:08:31.110 "data_offset": 2048, 00:08:31.110 "data_size": 63488 00:08:31.110 }, 00:08:31.110 { 00:08:31.110 "name": "pt3", 00:08:31.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.110 "is_configured": true, 00:08:31.110 "data_offset": 2048, 00:08:31.110 "data_size": 63488 00:08:31.110 } 00:08:31.110 ] 00:08:31.110 }' 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.110 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.370 [2024-11-26 22:53:10.461628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.370 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.631 "name": "raid_bdev1", 00:08:31.631 "aliases": [ 00:08:31.631 "4218ead4-aad0-47a6-9158-96e0cb6247d2" 00:08:31.631 ], 00:08:31.631 "product_name": "Raid Volume", 00:08:31.631 "block_size": 512, 00:08:31.631 "num_blocks": 190464, 00:08:31.631 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:31.631 "assigned_rate_limits": { 00:08:31.631 "rw_ios_per_sec": 0, 00:08:31.631 "rw_mbytes_per_sec": 0, 00:08:31.631 "r_mbytes_per_sec": 0, 00:08:31.631 "w_mbytes_per_sec": 0 00:08:31.631 }, 00:08:31.631 "claimed": false, 00:08:31.631 "zoned": false, 00:08:31.631 "supported_io_types": { 00:08:31.631 "read": true, 00:08:31.631 "write": true, 00:08:31.631 "unmap": true, 00:08:31.631 "flush": true, 00:08:31.631 "reset": true, 00:08:31.631 "nvme_admin": false, 00:08:31.631 "nvme_io": false, 00:08:31.631 "nvme_io_md": false, 00:08:31.631 "write_zeroes": true, 00:08:31.631 "zcopy": false, 00:08:31.631 "get_zone_info": false, 00:08:31.631 "zone_management": false, 00:08:31.631 "zone_append": false, 00:08:31.631 "compare": false, 00:08:31.631 "compare_and_write": false, 00:08:31.631 "abort": false, 00:08:31.631 "seek_hole": false, 00:08:31.631 "seek_data": false, 00:08:31.631 "copy": false, 00:08:31.631 "nvme_iov_md": false 00:08:31.631 }, 00:08:31.631 "memory_domains": [ 00:08:31.631 { 00:08:31.631 "dma_device_id": "system", 00:08:31.631 
"dma_device_type": 1 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.631 "dma_device_type": 2 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "dma_device_id": "system", 00:08:31.631 "dma_device_type": 1 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.631 "dma_device_type": 2 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "dma_device_id": "system", 00:08:31.631 "dma_device_type": 1 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.631 "dma_device_type": 2 00:08:31.631 } 00:08:31.631 ], 00:08:31.631 "driver_specific": { 00:08:31.631 "raid": { 00:08:31.631 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:31.631 "strip_size_kb": 64, 00:08:31.631 "state": "online", 00:08:31.631 "raid_level": "raid0", 00:08:31.631 "superblock": true, 00:08:31.631 "num_base_bdevs": 3, 00:08:31.631 "num_base_bdevs_discovered": 3, 00:08:31.631 "num_base_bdevs_operational": 3, 00:08:31.631 "base_bdevs_list": [ 00:08:31.631 { 00:08:31.631 "name": "pt1", 00:08:31.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.631 "is_configured": true, 00:08:31.631 "data_offset": 2048, 00:08:31.631 "data_size": 63488 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "name": "pt2", 00:08:31.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.631 "is_configured": true, 00:08:31.631 "data_offset": 2048, 00:08:31.631 "data_size": 63488 00:08:31.631 }, 00:08:31.631 { 00:08:31.631 "name": "pt3", 00:08:31.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.631 "is_configured": true, 00:08:31.631 "data_offset": 2048, 00:08:31.631 "data_size": 63488 00:08:31.631 } 00:08:31.631 ] 00:08:31.631 } 00:08:31.631 } 00:08:31.631 }' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:31.631 pt2 00:08:31.631 pt3' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:31.631 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 [2024-11-26 22:53:10.713677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4218ead4-aad0-47a6-9158-96e0cb6247d2 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4218ead4-aad0-47a6-9158-96e0cb6247d2 ']' 00:08:31.632 22:53:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 [2024-11-26 22:53:10.745393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.632 [2024-11-26 22:53:10.745416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.632 [2024-11-26 22:53:10.745495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.632 [2024-11-26 22:53:10.745564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.632 [2024-11-26 22:53:10.745577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.892 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.893 22:53:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 [2024-11-26 22:53:10.873461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:31.893 [2024-11-26 22:53:10.875317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:31.893 [2024-11-26 22:53:10.875405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:31.893 [2024-11-26 22:53:10.875466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:31.893 [2024-11-26 22:53:10.875570] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:31.893 [2024-11-26 22:53:10.875634] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:31.893 [2024-11-26 22:53:10.875675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.893 [2024-11-26 22:53:10.875689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:31.893 request: 00:08:31.893 { 00:08:31.893 "name": "raid_bdev1", 00:08:31.893 "raid_level": "raid0", 00:08:31.893 "base_bdevs": [ 00:08:31.893 "malloc1", 00:08:31.893 "malloc2", 00:08:31.893 "malloc3" 00:08:31.893 ], 00:08:31.893 "strip_size_kb": 64, 00:08:31.893 "superblock": false, 00:08:31.893 "method": "bdev_raid_create", 00:08:31.893 "req_id": 1 00:08:31.893 } 00:08:31.893 Got JSON-RPC error response 00:08:31.893 response: 00:08:31.893 { 00:08:31.893 "code": -17, 00:08:31.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:31.893 } 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 22:53:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 [2024-11-26 22:53:10.933442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:31.893 [2024-11-26 22:53:10.933526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.893 [2024-11-26 22:53:10.933560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:31.893 [2024-11-26 22:53:10.933583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.893 [2024-11-26 22:53:10.935668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.893 [2024-11-26 22:53:10.935732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:31.893 [2024-11-26 22:53:10.935816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:31.893 [2024-11-26 22:53:10.935867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:31.893 pt1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:31.893 22:53:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.893 "name": "raid_bdev1", 00:08:31.893 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:31.893 "strip_size_kb": 64, 00:08:31.893 "state": "configuring", 00:08:31.893 "raid_level": "raid0", 00:08:31.893 "superblock": true, 00:08:31.893 "num_base_bdevs": 3, 00:08:31.893 "num_base_bdevs_discovered": 1, 00:08:31.893 "num_base_bdevs_operational": 3, 00:08:31.893 "base_bdevs_list": [ 
00:08:31.893 { 00:08:31.893 "name": "pt1", 00:08:31.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.893 "is_configured": true, 00:08:31.893 "data_offset": 2048, 00:08:31.893 "data_size": 63488 00:08:31.893 }, 00:08:31.893 { 00:08:31.893 "name": null, 00:08:31.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.893 "is_configured": false, 00:08:31.893 "data_offset": 2048, 00:08:31.893 "data_size": 63488 00:08:31.893 }, 00:08:31.893 { 00:08:31.893 "name": null, 00:08:31.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.893 "is_configured": false, 00:08:31.893 "data_offset": 2048, 00:08:31.893 "data_size": 63488 00:08:31.893 } 00:08:31.893 ] 00:08:31.893 }' 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.893 22:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.463 [2024-11-26 22:53:11.393583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.463 [2024-11-26 22:53:11.393692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.463 [2024-11-26 22:53:11.393732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:32.463 [2024-11-26 22:53:11.393759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.463 [2024-11-26 22:53:11.394188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.463 [2024-11-26 
22:53:11.394244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.463 [2024-11-26 22:53:11.394358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:32.463 [2024-11-26 22:53:11.394413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.463 pt2 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.463 [2024-11-26 22:53:11.405612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.463 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.463 "name": "raid_bdev1", 00:08:32.463 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:32.463 "strip_size_kb": 64, 00:08:32.463 "state": "configuring", 00:08:32.463 "raid_level": "raid0", 00:08:32.463 "superblock": true, 00:08:32.463 "num_base_bdevs": 3, 00:08:32.463 "num_base_bdevs_discovered": 1, 00:08:32.463 "num_base_bdevs_operational": 3, 00:08:32.463 "base_bdevs_list": [ 00:08:32.463 { 00:08:32.463 "name": "pt1", 00:08:32.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.463 "is_configured": true, 00:08:32.463 "data_offset": 2048, 00:08:32.463 "data_size": 63488 00:08:32.463 }, 00:08:32.463 { 00:08:32.463 "name": null, 00:08:32.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.463 "is_configured": false, 00:08:32.463 "data_offset": 0, 00:08:32.463 "data_size": 63488 00:08:32.463 }, 00:08:32.463 { 00:08:32.463 "name": null, 00:08:32.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.463 "is_configured": false, 00:08:32.464 "data_offset": 2048, 00:08:32.464 "data_size": 63488 00:08:32.464 } 00:08:32.464 ] 00:08:32.464 }' 00:08:32.464 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.464 22:53:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 [2024-11-26 22:53:11.857718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.033 [2024-11-26 22:53:11.857825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.033 [2024-11-26 22:53:11.857860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:33.033 [2024-11-26 22:53:11.857890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.033 [2024-11-26 22:53:11.858346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.033 [2024-11-26 22:53:11.858403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.033 [2024-11-26 22:53:11.858500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.033 [2024-11-26 22:53:11.858552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.033 pt2 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 [2024-11-26 22:53:11.869681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:33.033 [2024-11-26 22:53:11.869763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.033 [2024-11-26 22:53:11.869790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:33.033 [2024-11-26 22:53:11.869817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.033 [2024-11-26 22:53:11.870120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.033 [2024-11-26 22:53:11.870201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:33.033 [2024-11-26 22:53:11.870288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:33.033 [2024-11-26 22:53:11.870336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:33.033 [2024-11-26 22:53:11.870446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:33.033 [2024-11-26 22:53:11.870494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.033 [2024-11-26 22:53:11.870728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:33.033 [2024-11-26 22:53:11.870866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:33.033 [2024-11-26 22:53:11.870902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:33.033 [2024-11-26 22:53:11.871002] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.033 pt3 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.033 "name": "raid_bdev1", 00:08:33.033 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:33.033 "strip_size_kb": 64, 00:08:33.033 "state": "online", 00:08:33.033 "raid_level": "raid0", 00:08:33.033 "superblock": true, 00:08:33.033 "num_base_bdevs": 3, 00:08:33.033 "num_base_bdevs_discovered": 3, 00:08:33.033 "num_base_bdevs_operational": 3, 00:08:33.033 "base_bdevs_list": [ 00:08:33.033 { 00:08:33.033 "name": "pt1", 00:08:33.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.033 "is_configured": true, 00:08:33.033 "data_offset": 2048, 00:08:33.033 "data_size": 63488 00:08:33.033 }, 00:08:33.033 { 00:08:33.033 "name": "pt2", 00:08:33.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.033 "is_configured": true, 00:08:33.033 "data_offset": 2048, 00:08:33.033 "data_size": 63488 00:08:33.033 }, 00:08:33.033 { 00:08:33.033 "name": "pt3", 00:08:33.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.033 "is_configured": true, 00:08:33.033 "data_offset": 2048, 00:08:33.033 "data_size": 63488 00:08:33.033 } 00:08:33.033 ] 00:08:33.033 }' 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.033 22:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.293 22:53:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.293 [2024-11-26 22:53:12.346125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.293 "name": "raid_bdev1", 00:08:33.293 "aliases": [ 00:08:33.293 "4218ead4-aad0-47a6-9158-96e0cb6247d2" 00:08:33.293 ], 00:08:33.293 "product_name": "Raid Volume", 00:08:33.293 "block_size": 512, 00:08:33.293 "num_blocks": 190464, 00:08:33.293 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:33.293 "assigned_rate_limits": { 00:08:33.293 "rw_ios_per_sec": 0, 00:08:33.293 "rw_mbytes_per_sec": 0, 00:08:33.293 "r_mbytes_per_sec": 0, 00:08:33.293 "w_mbytes_per_sec": 0 00:08:33.293 }, 00:08:33.293 "claimed": false, 00:08:33.293 "zoned": false, 00:08:33.293 "supported_io_types": { 00:08:33.293 "read": true, 00:08:33.293 "write": true, 00:08:33.293 "unmap": true, 00:08:33.293 "flush": true, 00:08:33.293 "reset": true, 00:08:33.293 "nvme_admin": false, 00:08:33.293 "nvme_io": false, 00:08:33.293 "nvme_io_md": false, 00:08:33.293 "write_zeroes": true, 00:08:33.293 "zcopy": false, 00:08:33.293 "get_zone_info": false, 00:08:33.293 "zone_management": false, 00:08:33.293 "zone_append": false, 00:08:33.293 "compare": false, 00:08:33.293 "compare_and_write": false, 00:08:33.293 "abort": false, 00:08:33.293 "seek_hole": false, 00:08:33.293 
"seek_data": false, 00:08:33.293 "copy": false, 00:08:33.293 "nvme_iov_md": false 00:08:33.293 }, 00:08:33.293 "memory_domains": [ 00:08:33.293 { 00:08:33.293 "dma_device_id": "system", 00:08:33.293 "dma_device_type": 1 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.293 "dma_device_type": 2 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "dma_device_id": "system", 00:08:33.293 "dma_device_type": 1 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.293 "dma_device_type": 2 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "dma_device_id": "system", 00:08:33.293 "dma_device_type": 1 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.293 "dma_device_type": 2 00:08:33.293 } 00:08:33.293 ], 00:08:33.293 "driver_specific": { 00:08:33.293 "raid": { 00:08:33.293 "uuid": "4218ead4-aad0-47a6-9158-96e0cb6247d2", 00:08:33.293 "strip_size_kb": 64, 00:08:33.293 "state": "online", 00:08:33.293 "raid_level": "raid0", 00:08:33.293 "superblock": true, 00:08:33.293 "num_base_bdevs": 3, 00:08:33.293 "num_base_bdevs_discovered": 3, 00:08:33.293 "num_base_bdevs_operational": 3, 00:08:33.293 "base_bdevs_list": [ 00:08:33.293 { 00:08:33.293 "name": "pt1", 00:08:33.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.293 "is_configured": true, 00:08:33.293 "data_offset": 2048, 00:08:33.293 "data_size": 63488 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "name": "pt2", 00:08:33.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.293 "is_configured": true, 00:08:33.293 "data_offset": 2048, 00:08:33.293 "data_size": 63488 00:08:33.293 }, 00:08:33.293 { 00:08:33.293 "name": "pt3", 00:08:33.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.293 "is_configured": true, 00:08:33.293 "data_offset": 2048, 00:08:33.293 "data_size": 63488 00:08:33.293 } 00:08:33.293 ] 00:08:33.293 } 00:08:33.293 } 00:08:33.293 }' 00:08:33.293 22:53:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.293 pt2 00:08:33.293 pt3' 00:08:33.293 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:33.554 [2024-11-26 22:53:12.594160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
4218ead4-aad0-47a6-9158-96e0cb6247d2 '!=' 4218ead4-aad0-47a6-9158-96e0cb6247d2 ']' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77852 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77852 ']' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77852 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77852 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77852' 00:08:33.554 killing process with pid 77852 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77852 00:08:33.554 [2024-11-26 22:53:12.669106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.554 [2024-11-26 22:53:12.669259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.554 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77852 00:08:33.554 [2024-11-26 22:53:12.669348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:08:33.554 [2024-11-26 22:53:12.669362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:33.813 [2024-11-26 22:53:12.702671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.813 22:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:33.813 00:08:33.813 real 0m3.937s 00:08:33.814 user 0m6.170s 00:08:33.814 sys 0m0.899s 00:08:33.814 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.814 22:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.814 ************************************ 00:08:33.814 END TEST raid_superblock_test 00:08:33.814 ************************************ 00:08:34.073 22:53:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:34.073 22:53:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.073 22:53:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.073 22:53:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.073 ************************************ 00:08:34.073 START TEST raid_read_error_test 00:08:34.073 ************************************ 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.073 22:53:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.073 22:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CqqOWPL0cU 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78094 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78094 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78094 ']' 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.073 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.073 [2024-11-26 22:53:13.109152] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:34.073 [2024-11-26 22:53:13.109365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78094 ] 00:08:34.333 [2024-11-26 22:53:13.252603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:34.333 [2024-11-26 22:53:13.290419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.333 [2024-11-26 22:53:13.316396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.333 [2024-11-26 22:53:13.357571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.333 [2024-11-26 22:53:13.357614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 BaseBdev1_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 true 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 [2024-11-26 22:53:13.969813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:34.903 [2024-11-26 22:53:13.969890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.903 [2024-11-26 22:53:13.969910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:34.903 [2024-11-26 22:53:13.969924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.903 [2024-11-26 22:53:13.972010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.903 [2024-11-26 22:53:13.972048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:34.903 BaseBdev1 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 BaseBdev2_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 true 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.903 [2024-11-26 22:53:14.010206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:34.903 [2024-11-26 22:53:14.010258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.903 [2024-11-26 22:53:14.010274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:34.903 [2024-11-26 22:53:14.010284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.903 [2024-11-26 22:53:14.012283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.903 [2024-11-26 22:53:14.012311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:34.903 BaseBdev2 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.903 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 BaseBdev3_malloc 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:35.165 
22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 true 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 [2024-11-26 22:53:14.050682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:35.165 [2024-11-26 22:53:14.050726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.165 [2024-11-26 22:53:14.050743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.165 [2024-11-26 22:53:14.050753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.165 [2024-11-26 22:53:14.052729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.165 [2024-11-26 22:53:14.052762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:35.165 BaseBdev3 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 [2024-11-26 22:53:14.062764] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.165 [2024-11-26 22:53:14.064555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.165 [2024-11-26 22:53:14.064640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.165 [2024-11-26 22:53:14.064805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.165 [2024-11-26 22:53:14.064828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.165 [2024-11-26 22:53:14.065057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:35.165 [2024-11-26 22:53:14.065201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.165 [2024-11-26 22:53:14.065220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:35.165 [2024-11-26 22:53:14.065341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.165 "name": "raid_bdev1", 00:08:35.165 "uuid": "4c7c5cf5-5ddb-4392-aa7a-a186a55681da", 00:08:35.165 "strip_size_kb": 64, 00:08:35.165 "state": "online", 00:08:35.165 "raid_level": "raid0", 00:08:35.165 "superblock": true, 00:08:35.165 "num_base_bdevs": 3, 00:08:35.165 "num_base_bdevs_discovered": 3, 00:08:35.165 "num_base_bdevs_operational": 3, 00:08:35.165 "base_bdevs_list": [ 00:08:35.165 { 00:08:35.165 "name": "BaseBdev1", 00:08:35.165 "uuid": "d46a7166-f43d-55fd-8ab7-eccc05f0e27b", 00:08:35.165 "is_configured": true, 00:08:35.165 "data_offset": 2048, 00:08:35.165 "data_size": 63488 00:08:35.165 }, 00:08:35.165 { 00:08:35.165 "name": "BaseBdev2", 00:08:35.165 "uuid": "3f12ae72-cd52-5424-bd09-10d6618f6812", 00:08:35.165 "is_configured": true, 00:08:35.165 "data_offset": 2048, 00:08:35.165 "data_size": 63488 00:08:35.165 }, 00:08:35.165 { 00:08:35.165 "name": "BaseBdev3", 00:08:35.165 "uuid": "024d22d0-d55b-574c-bfe7-2e79f7de8639", 00:08:35.165 "is_configured": true, 00:08:35.165 "data_offset": 
2048, 00:08:35.165 "data_size": 63488 00:08:35.165 } 00:08:35.165 ] 00:08:35.165 }' 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.165 22:53:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.476 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.476 22:53:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:35.736 [2024-11-26 22:53:14.587339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.673 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.673 "name": "raid_bdev1", 00:08:36.673 "uuid": "4c7c5cf5-5ddb-4392-aa7a-a186a55681da", 00:08:36.673 "strip_size_kb": 64, 00:08:36.673 "state": "online", 00:08:36.673 "raid_level": "raid0", 00:08:36.673 "superblock": true, 00:08:36.673 "num_base_bdevs": 3, 00:08:36.673 "num_base_bdevs_discovered": 3, 00:08:36.673 "num_base_bdevs_operational": 3, 00:08:36.673 "base_bdevs_list": [ 00:08:36.673 { 00:08:36.673 "name": "BaseBdev1", 00:08:36.673 "uuid": "d46a7166-f43d-55fd-8ab7-eccc05f0e27b", 00:08:36.673 "is_configured": true, 00:08:36.673 "data_offset": 2048, 00:08:36.673 "data_size": 63488 00:08:36.673 }, 00:08:36.673 { 00:08:36.673 "name": "BaseBdev2", 00:08:36.673 "uuid": "3f12ae72-cd52-5424-bd09-10d6618f6812", 00:08:36.673 "is_configured": true, 00:08:36.673 "data_offset": 2048, 
00:08:36.673 "data_size": 63488 00:08:36.673 }, 00:08:36.673 { 00:08:36.674 "name": "BaseBdev3", 00:08:36.674 "uuid": "024d22d0-d55b-574c-bfe7-2e79f7de8639", 00:08:36.674 "is_configured": true, 00:08:36.674 "data_offset": 2048, 00:08:36.674 "data_size": 63488 00:08:36.674 } 00:08:36.674 ] 00:08:36.674 }' 00:08:36.674 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.674 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.934 [2024-11-26 22:53:15.933517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.934 [2024-11-26 22:53:15.933567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.934 [2024-11-26 22:53:15.936142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.934 [2024-11-26 22:53:15.936191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.934 [2024-11-26 22:53:15.936226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.934 [2024-11-26 22:53:15.936241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:36.934 { 00:08:36.934 "results": [ 00:08:36.934 { 00:08:36.934 "job": "raid_bdev1", 00:08:36.934 "core_mask": "0x1", 00:08:36.934 "workload": "randrw", 00:08:36.934 "percentage": 50, 00:08:36.934 "status": "finished", 00:08:36.934 "queue_depth": 1, 00:08:36.934 "io_size": 131072, 00:08:36.934 "runtime": 1.34435, 00:08:36.934 "iops": 17094.50663889612, 00:08:36.934 "mibps": 
2136.813329862015, 00:08:36.934 "io_failed": 1, 00:08:36.934 "io_timeout": 0, 00:08:36.934 "avg_latency_us": 80.71173690834014, 00:08:36.934 "min_latency_us": 24.656149219907608, 00:08:36.934 "max_latency_us": 1328.085069293123 00:08:36.934 } 00:08:36.934 ], 00:08:36.934 "core_count": 1 00:08:36.934 } 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78094 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78094 ']' 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78094 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78094 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78094' 00:08:36.934 killing process with pid 78094 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78094 00:08:36.934 [2024-11-26 22:53:15.982576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.934 22:53:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78094 00:08:36.934 [2024-11-26 22:53:16.007766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CqqOWPL0cU 
00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:37.195 00:08:37.195 real 0m3.233s 00:08:37.195 user 0m4.042s 00:08:37.195 sys 0m0.581s 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.195 22:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 ************************************ 00:08:37.195 END TEST raid_read_error_test 00:08:37.195 ************************************ 00:08:37.195 22:53:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:37.195 22:53:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.195 22:53:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.195 22:53:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 ************************************ 00:08:37.195 START TEST raid_write_error_test 00:08:37.195 ************************************ 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:37.195 
22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:37.195 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZUzH64jikS 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78223 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78223 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78223 ']' 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.456 22:53:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.456 [2024-11-26 22:53:16.405838] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:08:37.456 [2024-11-26 22:53:16.405972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78223 ] 00:08:37.456 [2024-11-26 22:53:16.540394] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:37.456 [2024-11-26 22:53:16.579161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.715 [2024-11-26 22:53:16.604683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.715 [2024-11-26 22:53:16.647450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.715 [2024-11-26 22:53:16.647494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 BaseBdev1_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 true 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 [2024-11-26 22:53:17.284664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:38.286 [2024-11-26 22:53:17.284714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.286 [2024-11-26 22:53:17.284729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:38.286 [2024-11-26 22:53:17.284748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.286 [2024-11-26 22:53:17.286853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.286 [2024-11-26 22:53:17.286889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:38.286 BaseBdev1 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 BaseBdev2_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 true 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.286 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 [2024-11-26 22:53:17.324908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:38.286 [2024-11-26 22:53:17.324949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.286 [2024-11-26 22:53:17.324963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:38.286 [2024-11-26 22:53:17.324972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.286 [2024-11-26 22:53:17.326960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.286 [2024-11-26 22:53:17.326993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:38.287 BaseBdev2 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:38.287 22:53:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 BaseBdev3_malloc 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 true 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 [2024-11-26 22:53:17.365182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:38.287 [2024-11-26 22:53:17.365224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.287 [2024-11-26 22:53:17.365238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:38.287 [2024-11-26 22:53:17.365258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.287 [2024-11-26 22:53:17.367217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.287 [2024-11-26 22:53:17.367262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:38.287 BaseBdev3 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 [2024-11-26 22:53:17.377233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.287 [2024-11-26 22:53:17.378938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.287 [2024-11-26 22:53:17.379005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.287 [2024-11-26 22:53:17.379166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.287 [2024-11-26 22:53:17.379177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:38.287 [2024-11-26 22:53:17.379423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:38.287 [2024-11-26 22:53:17.379568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.287 [2024-11-26 22:53:17.379592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:38.287 [2024-11-26 22:53:17.379689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.547 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.547 "name": "raid_bdev1", 00:08:38.547 "uuid": "725993bd-e377-4424-838e-25b9c1c469d9", 00:08:38.547 "strip_size_kb": 64, 00:08:38.547 "state": "online", 00:08:38.547 "raid_level": "raid0", 00:08:38.547 "superblock": true, 00:08:38.547 "num_base_bdevs": 3, 00:08:38.547 "num_base_bdevs_discovered": 3, 00:08:38.547 "num_base_bdevs_operational": 3, 00:08:38.547 "base_bdevs_list": [ 00:08:38.547 { 00:08:38.547 "name": "BaseBdev1", 00:08:38.547 "uuid": "2219d6cd-b186-5ae3-a2dd-c734fc4155cd", 00:08:38.547 "is_configured": true, 00:08:38.547 "data_offset": 2048, 
00:08:38.547 "data_size": 63488 00:08:38.547 }, 00:08:38.547 { 00:08:38.547 "name": "BaseBdev2", 00:08:38.547 "uuid": "8b36ddbe-90d4-5cfd-addc-a5e546ca26e3", 00:08:38.547 "is_configured": true, 00:08:38.547 "data_offset": 2048, 00:08:38.547 "data_size": 63488 00:08:38.547 }, 00:08:38.547 { 00:08:38.547 "name": "BaseBdev3", 00:08:38.547 "uuid": "5c3d2a5a-2019-596d-8cb5-9b6dd586c297", 00:08:38.547 "is_configured": true, 00:08:38.547 "data_offset": 2048, 00:08:38.547 "data_size": 63488 00:08:38.547 } 00:08:38.547 ] 00:08:38.547 }' 00:08:38.547 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.547 22:53:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.807 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.807 22:53:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:38.807 [2024-11-26 22:53:17.921798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.745 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.005 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.005 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.005 "name": "raid_bdev1", 00:08:40.005 "uuid": "725993bd-e377-4424-838e-25b9c1c469d9", 00:08:40.005 "strip_size_kb": 64, 00:08:40.005 "state": "online", 00:08:40.005 "raid_level": "raid0", 00:08:40.005 "superblock": true, 00:08:40.005 "num_base_bdevs": 3, 00:08:40.005 "num_base_bdevs_discovered": 3, 
00:08:40.005 "num_base_bdevs_operational": 3, 00:08:40.005 "base_bdevs_list": [ 00:08:40.005 { 00:08:40.005 "name": "BaseBdev1", 00:08:40.005 "uuid": "2219d6cd-b186-5ae3-a2dd-c734fc4155cd", 00:08:40.005 "is_configured": true, 00:08:40.005 "data_offset": 2048, 00:08:40.005 "data_size": 63488 00:08:40.005 }, 00:08:40.005 { 00:08:40.005 "name": "BaseBdev2", 00:08:40.005 "uuid": "8b36ddbe-90d4-5cfd-addc-a5e546ca26e3", 00:08:40.005 "is_configured": true, 00:08:40.005 "data_offset": 2048, 00:08:40.005 "data_size": 63488 00:08:40.005 }, 00:08:40.005 { 00:08:40.005 "name": "BaseBdev3", 00:08:40.005 "uuid": "5c3d2a5a-2019-596d-8cb5-9b6dd586c297", 00:08:40.005 "is_configured": true, 00:08:40.005 "data_offset": 2048, 00:08:40.005 "data_size": 63488 00:08:40.005 } 00:08:40.005 ] 00:08:40.005 }' 00:08:40.005 22:53:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.005 22:53:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.264 [2024-11-26 22:53:19.308259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.264 [2024-11-26 22:53:19.308318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.264 [2024-11-26 22:53:19.310791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.264 [2024-11-26 22:53:19.310839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.264 [2024-11-26 22:53:19.310875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.264 [2024-11-26 22:53:19.310885] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:40.264 { 00:08:40.264 "results": [ 00:08:40.264 { 00:08:40.264 "job": "raid_bdev1", 00:08:40.264 "core_mask": "0x1", 00:08:40.264 "workload": "randrw", 00:08:40.264 "percentage": 50, 00:08:40.264 "status": "finished", 00:08:40.264 "queue_depth": 1, 00:08:40.264 "io_size": 131072, 00:08:40.264 "runtime": 1.384668, 00:08:40.264 "iops": 17014.908989013973, 00:08:40.264 "mibps": 2126.8636236267466, 00:08:40.264 "io_failed": 1, 00:08:40.264 "io_timeout": 0, 00:08:40.264 "avg_latency_us": 81.04566905602289, 00:08:40.264 "min_latency_us": 19.524100061012813, 00:08:40.264 "max_latency_us": 1385.2070077573433 00:08:40.264 } 00:08:40.264 ], 00:08:40.264 "core_count": 1 00:08:40.264 } 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78223 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78223 ']' 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78223 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78223 00:08:40.264 killing process with pid 78223 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78223' 00:08:40.264 22:53:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78223 00:08:40.264 [2024-11-26 22:53:19.356118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.264 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78223 00:08:40.264 [2024-11-26 22:53:19.381589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZUzH64jikS 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:40.523 00:08:40.523 real 0m3.302s 00:08:40.523 user 0m4.203s 00:08:40.523 sys 0m0.542s 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.523 22:53:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.523 ************************************ 00:08:40.524 END TEST raid_write_error_test 00:08:40.524 ************************************ 00:08:40.783 22:53:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:40.783 22:53:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:40.783 22:53:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:40.783 22:53:19 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.783 22:53:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 ************************************ 00:08:40.783 START TEST raid_state_function_test 00:08:40.783 ************************************ 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78350 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78350' 00:08:40.783 Process raid pid: 78350 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78350 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78350 ']' 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.783 22:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [2024-11-26 22:53:19.784272] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:40.783 [2024-11-26 22:53:19.784410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.042 [2024-11-26 22:53:19.926492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:41.042 [2024-11-26 22:53:19.966684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.042 [2024-11-26 22:53:19.992207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.042 [2024-11-26 22:53:20.033760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.042 [2024-11-26 22:53:20.033793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 [2024-11-26 22:53:20.621296] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.609 [2024-11-26 22:53:20.621349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.609 [2024-11-26 22:53:20.621363] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.609 [2024-11-26 22:53:20.621371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.609 [2024-11-26 22:53:20.621383] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:41.609 [2024-11-26 22:53:20.621391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.609 22:53:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.609 "name": "Existed_Raid", 00:08:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.609 "strip_size_kb": 64, 00:08:41.609 "state": "configuring", 00:08:41.609 
"raid_level": "concat", 00:08:41.609 "superblock": false, 00:08:41.609 "num_base_bdevs": 3, 00:08:41.609 "num_base_bdevs_discovered": 0, 00:08:41.609 "num_base_bdevs_operational": 3, 00:08:41.609 "base_bdevs_list": [ 00:08:41.609 { 00:08:41.609 "name": "BaseBdev1", 00:08:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.609 "is_configured": false, 00:08:41.609 "data_offset": 0, 00:08:41.609 "data_size": 0 00:08:41.609 }, 00:08:41.609 { 00:08:41.609 "name": "BaseBdev2", 00:08:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.609 "is_configured": false, 00:08:41.609 "data_offset": 0, 00:08:41.609 "data_size": 0 00:08:41.609 }, 00:08:41.609 { 00:08:41.609 "name": "BaseBdev3", 00:08:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.609 "is_configured": false, 00:08:41.609 "data_offset": 0, 00:08:41.609 "data_size": 0 00:08:41.609 } 00:08:41.609 ] 00:08:41.609 }' 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.609 22:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.195 [2024-11-26 22:53:21.113336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.195 [2024-11-26 22:53:21.113372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.195 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.195 [2024-11-26 22:53:21.125370] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.196 [2024-11-26 22:53:21.125420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.196 [2024-11-26 22:53:21.125430] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.196 [2024-11-26 22:53:21.125437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.196 [2024-11-26 22:53:21.125445] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.196 [2024-11-26 22:53:21.125453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.196 [2024-11-26 22:53:21.146133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.196 BaseBdev1 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.196 22:53:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.196 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.196 [ 00:08:42.196 { 00:08:42.196 "name": "BaseBdev1", 00:08:42.196 "aliases": [ 00:08:42.196 "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940" 00:08:42.196 ], 00:08:42.197 "product_name": "Malloc disk", 00:08:42.197 "block_size": 512, 00:08:42.197 "num_blocks": 65536, 00:08:42.197 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 00:08:42.197 "assigned_rate_limits": { 00:08:42.197 "rw_ios_per_sec": 0, 00:08:42.197 "rw_mbytes_per_sec": 0, 00:08:42.197 "r_mbytes_per_sec": 0, 00:08:42.197 "w_mbytes_per_sec": 0 00:08:42.197 }, 00:08:42.197 "claimed": true, 00:08:42.197 "claim_type": "exclusive_write", 00:08:42.197 "zoned": false, 00:08:42.197 "supported_io_types": { 00:08:42.197 "read": true, 00:08:42.197 "write": true, 00:08:42.197 "unmap": true, 00:08:42.197 "flush": true, 
00:08:42.197 "reset": true, 00:08:42.197 "nvme_admin": false, 00:08:42.197 "nvme_io": false, 00:08:42.197 "nvme_io_md": false, 00:08:42.197 "write_zeroes": true, 00:08:42.197 "zcopy": true, 00:08:42.197 "get_zone_info": false, 00:08:42.197 "zone_management": false, 00:08:42.197 "zone_append": false, 00:08:42.197 "compare": false, 00:08:42.197 "compare_and_write": false, 00:08:42.197 "abort": true, 00:08:42.197 "seek_hole": false, 00:08:42.197 "seek_data": false, 00:08:42.197 "copy": true, 00:08:42.197 "nvme_iov_md": false 00:08:42.197 }, 00:08:42.197 "memory_domains": [ 00:08:42.197 { 00:08:42.197 "dma_device_id": "system", 00:08:42.197 "dma_device_type": 1 00:08:42.197 }, 00:08:42.197 { 00:08:42.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.197 "dma_device_type": 2 00:08:42.197 } 00:08:42.197 ], 00:08:42.197 "driver_specific": {} 00:08:42.197 } 00:08:42.197 ] 00:08:42.197 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.197 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.198 "name": "Existed_Raid", 00:08:42.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.198 "strip_size_kb": 64, 00:08:42.198 "state": "configuring", 00:08:42.198 "raid_level": "concat", 00:08:42.198 "superblock": false, 00:08:42.198 "num_base_bdevs": 3, 00:08:42.198 "num_base_bdevs_discovered": 1, 00:08:42.198 "num_base_bdevs_operational": 3, 00:08:42.198 "base_bdevs_list": [ 00:08:42.198 { 00:08:42.198 "name": "BaseBdev1", 00:08:42.198 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 00:08:42.198 "is_configured": true, 00:08:42.198 "data_offset": 0, 00:08:42.198 "data_size": 65536 00:08:42.198 }, 00:08:42.198 { 00:08:42.198 "name": "BaseBdev2", 00:08:42.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.198 "is_configured": false, 00:08:42.198 "data_offset": 0, 00:08:42.198 "data_size": 0 00:08:42.198 }, 00:08:42.198 { 00:08:42.198 "name": "BaseBdev3", 00:08:42.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.198 "is_configured": false, 00:08:42.198 "data_offset": 0, 00:08:42.198 "data_size": 0 
00:08:42.198 } 00:08:42.198 ] 00:08:42.198 }' 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.198 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.767 [2024-11-26 22:53:21.606331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.767 [2024-11-26 22:53:21.606382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.767 [2024-11-26 22:53:21.614367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.767 [2024-11-26 22:53:21.616160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.767 [2024-11-26 22:53:21.616203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.767 [2024-11-26 22:53:21.616217] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.767 [2024-11-26 22:53:21.616225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.767 22:53:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.767 "name": "Existed_Raid", 00:08:42.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.767 "strip_size_kb": 64, 00:08:42.767 "state": "configuring", 00:08:42.767 "raid_level": "concat", 00:08:42.767 "superblock": false, 00:08:42.767 "num_base_bdevs": 3, 00:08:42.767 "num_base_bdevs_discovered": 1, 00:08:42.767 "num_base_bdevs_operational": 3, 00:08:42.767 "base_bdevs_list": [ 00:08:42.767 { 00:08:42.767 "name": "BaseBdev1", 00:08:42.767 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 00:08:42.767 "is_configured": true, 00:08:42.767 "data_offset": 0, 00:08:42.767 "data_size": 65536 00:08:42.767 }, 00:08:42.767 { 00:08:42.767 "name": "BaseBdev2", 00:08:42.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.767 "is_configured": false, 00:08:42.767 "data_offset": 0, 00:08:42.767 "data_size": 0 00:08:42.767 }, 00:08:42.767 { 00:08:42.767 "name": "BaseBdev3", 00:08:42.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.767 "is_configured": false, 00:08:42.767 "data_offset": 0, 00:08:42.767 "data_size": 0 00:08:42.767 } 00:08:42.767 ] 00:08:42.767 }' 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.767 22:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.027 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.027 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.027 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.027 [2024-11-26 22:53:22.073453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.027 BaseBdev2 00:08:43.027 
22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.027 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 [ 00:08:43.028 { 00:08:43.028 "name": "BaseBdev2", 00:08:43.028 "aliases": [ 00:08:43.028 "1739eff5-1fae-4513-afaf-dd4f4ad8386d" 00:08:43.028 ], 00:08:43.028 "product_name": "Malloc disk", 00:08:43.028 "block_size": 512, 00:08:43.028 "num_blocks": 65536, 00:08:43.028 "uuid": "1739eff5-1fae-4513-afaf-dd4f4ad8386d", 00:08:43.028 "assigned_rate_limits": { 00:08:43.028 "rw_ios_per_sec": 0, 00:08:43.028 "rw_mbytes_per_sec": 0, 
00:08:43.028 "r_mbytes_per_sec": 0, 00:08:43.028 "w_mbytes_per_sec": 0 00:08:43.028 }, 00:08:43.028 "claimed": true, 00:08:43.028 "claim_type": "exclusive_write", 00:08:43.028 "zoned": false, 00:08:43.028 "supported_io_types": { 00:08:43.028 "read": true, 00:08:43.028 "write": true, 00:08:43.028 "unmap": true, 00:08:43.028 "flush": true, 00:08:43.028 "reset": true, 00:08:43.028 "nvme_admin": false, 00:08:43.028 "nvme_io": false, 00:08:43.028 "nvme_io_md": false, 00:08:43.028 "write_zeroes": true, 00:08:43.028 "zcopy": true, 00:08:43.028 "get_zone_info": false, 00:08:43.028 "zone_management": false, 00:08:43.028 "zone_append": false, 00:08:43.028 "compare": false, 00:08:43.028 "compare_and_write": false, 00:08:43.028 "abort": true, 00:08:43.028 "seek_hole": false, 00:08:43.028 "seek_data": false, 00:08:43.028 "copy": true, 00:08:43.028 "nvme_iov_md": false 00:08:43.028 }, 00:08:43.028 "memory_domains": [ 00:08:43.028 { 00:08:43.028 "dma_device_id": "system", 00:08:43.028 "dma_device_type": 1 00:08:43.028 }, 00:08:43.028 { 00:08:43.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.028 "dma_device_type": 2 00:08:43.028 } 00:08:43.028 ], 00:08:43.028 "driver_specific": {} 00:08:43.028 } 00:08:43.028 ] 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.288 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.288 "name": "Existed_Raid", 00:08:43.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.288 "strip_size_kb": 64, 00:08:43.288 "state": "configuring", 00:08:43.288 "raid_level": "concat", 00:08:43.288 "superblock": false, 00:08:43.288 "num_base_bdevs": 3, 00:08:43.288 "num_base_bdevs_discovered": 2, 00:08:43.288 "num_base_bdevs_operational": 3, 00:08:43.288 "base_bdevs_list": [ 00:08:43.288 { 00:08:43.288 "name": "BaseBdev1", 00:08:43.288 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 
00:08:43.288 "is_configured": true, 00:08:43.288 "data_offset": 0, 00:08:43.288 "data_size": 65536 00:08:43.288 }, 00:08:43.288 { 00:08:43.288 "name": "BaseBdev2", 00:08:43.288 "uuid": "1739eff5-1fae-4513-afaf-dd4f4ad8386d", 00:08:43.288 "is_configured": true, 00:08:43.288 "data_offset": 0, 00:08:43.288 "data_size": 65536 00:08:43.288 }, 00:08:43.288 { 00:08:43.288 "name": "BaseBdev3", 00:08:43.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.288 "is_configured": false, 00:08:43.288 "data_offset": 0, 00:08:43.288 "data_size": 0 00:08:43.288 } 00:08:43.288 ] 00:08:43.288 }' 00:08:43.288 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.288 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.549 [2024-11-26 22:53:22.497869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.549 [2024-11-26 22:53:22.497978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:43.549 [2024-11-26 22:53:22.498006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:43.549 [2024-11-26 22:53:22.499005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:43.549 [2024-11-26 22:53:22.499481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:43.549 [2024-11-26 22:53:22.499541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:43.549 [2024-11-26 22:53:22.500092] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.549 BaseBdev3 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.549 [ 00:08:43.549 { 00:08:43.549 "name": "BaseBdev3", 00:08:43.549 "aliases": [ 00:08:43.549 "c9129b3d-e2bd-4412-8144-0699035b5840" 00:08:43.549 ], 00:08:43.549 "product_name": "Malloc disk", 00:08:43.549 "block_size": 512, 00:08:43.549 "num_blocks": 65536, 00:08:43.549 "uuid": "c9129b3d-e2bd-4412-8144-0699035b5840", 00:08:43.549 
"assigned_rate_limits": { 00:08:43.549 "rw_ios_per_sec": 0, 00:08:43.549 "rw_mbytes_per_sec": 0, 00:08:43.549 "r_mbytes_per_sec": 0, 00:08:43.549 "w_mbytes_per_sec": 0 00:08:43.549 }, 00:08:43.549 "claimed": true, 00:08:43.549 "claim_type": "exclusive_write", 00:08:43.549 "zoned": false, 00:08:43.549 "supported_io_types": { 00:08:43.549 "read": true, 00:08:43.549 "write": true, 00:08:43.549 "unmap": true, 00:08:43.549 "flush": true, 00:08:43.549 "reset": true, 00:08:43.549 "nvme_admin": false, 00:08:43.549 "nvme_io": false, 00:08:43.549 "nvme_io_md": false, 00:08:43.549 "write_zeroes": true, 00:08:43.549 "zcopy": true, 00:08:43.549 "get_zone_info": false, 00:08:43.549 "zone_management": false, 00:08:43.549 "zone_append": false, 00:08:43.549 "compare": false, 00:08:43.549 "compare_and_write": false, 00:08:43.549 "abort": true, 00:08:43.549 "seek_hole": false, 00:08:43.549 "seek_data": false, 00:08:43.549 "copy": true, 00:08:43.549 "nvme_iov_md": false 00:08:43.549 }, 00:08:43.549 "memory_domains": [ 00:08:43.549 { 00:08:43.549 "dma_device_id": "system", 00:08:43.549 "dma_device_type": 1 00:08:43.549 }, 00:08:43.549 { 00:08:43.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.549 "dma_device_type": 2 00:08:43.549 } 00:08:43.549 ], 00:08:43.549 "driver_specific": {} 00:08:43.549 } 00:08:43.549 ] 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.549 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.550 "name": "Existed_Raid", 00:08:43.550 "uuid": "1c00e101-3a47-4dac-831e-f4377f56df21", 00:08:43.550 "strip_size_kb": 64, 00:08:43.550 "state": "online", 00:08:43.550 "raid_level": "concat", 00:08:43.550 "superblock": false, 00:08:43.550 "num_base_bdevs": 3, 00:08:43.550 "num_base_bdevs_discovered": 3, 00:08:43.550 "num_base_bdevs_operational": 3, 00:08:43.550 "base_bdevs_list": [ 00:08:43.550 { 
00:08:43.550 "name": "BaseBdev1", 00:08:43.550 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 00:08:43.550 "is_configured": true, 00:08:43.550 "data_offset": 0, 00:08:43.550 "data_size": 65536 00:08:43.550 }, 00:08:43.550 { 00:08:43.550 "name": "BaseBdev2", 00:08:43.550 "uuid": "1739eff5-1fae-4513-afaf-dd4f4ad8386d", 00:08:43.550 "is_configured": true, 00:08:43.550 "data_offset": 0, 00:08:43.550 "data_size": 65536 00:08:43.550 }, 00:08:43.550 { 00:08:43.550 "name": "BaseBdev3", 00:08:43.550 "uuid": "c9129b3d-e2bd-4412-8144-0699035b5840", 00:08:43.550 "is_configured": true, 00:08:43.550 "data_offset": 0, 00:08:43.550 "data_size": 65536 00:08:43.550 } 00:08:43.550 ] 00:08:43.550 }' 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.550 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- 
# jq '.[]' 00:08:44.120 [2024-11-26 22:53:22.970342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.120 "name": "Existed_Raid", 00:08:44.120 "aliases": [ 00:08:44.120 "1c00e101-3a47-4dac-831e-f4377f56df21" 00:08:44.120 ], 00:08:44.120 "product_name": "Raid Volume", 00:08:44.120 "block_size": 512, 00:08:44.120 "num_blocks": 196608, 00:08:44.120 "uuid": "1c00e101-3a47-4dac-831e-f4377f56df21", 00:08:44.120 "assigned_rate_limits": { 00:08:44.120 "rw_ios_per_sec": 0, 00:08:44.120 "rw_mbytes_per_sec": 0, 00:08:44.120 "r_mbytes_per_sec": 0, 00:08:44.120 "w_mbytes_per_sec": 0 00:08:44.120 }, 00:08:44.120 "claimed": false, 00:08:44.120 "zoned": false, 00:08:44.120 "supported_io_types": { 00:08:44.120 "read": true, 00:08:44.120 "write": true, 00:08:44.120 "unmap": true, 00:08:44.120 "flush": true, 00:08:44.120 "reset": true, 00:08:44.120 "nvme_admin": false, 00:08:44.120 "nvme_io": false, 00:08:44.120 "nvme_io_md": false, 00:08:44.120 "write_zeroes": true, 00:08:44.120 "zcopy": false, 00:08:44.120 "get_zone_info": false, 00:08:44.120 "zone_management": false, 00:08:44.120 "zone_append": false, 00:08:44.120 "compare": false, 00:08:44.120 "compare_and_write": false, 00:08:44.120 "abort": false, 00:08:44.120 "seek_hole": false, 00:08:44.120 "seek_data": false, 00:08:44.120 "copy": false, 00:08:44.120 "nvme_iov_md": false 00:08:44.120 }, 00:08:44.120 "memory_domains": [ 00:08:44.120 { 00:08:44.120 "dma_device_id": "system", 00:08:44.120 "dma_device_type": 1 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.120 "dma_device_type": 2 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "dma_device_id": "system", 00:08:44.120 "dma_device_type": 1 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.120 "dma_device_type": 2 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "dma_device_id": "system", 00:08:44.120 "dma_device_type": 1 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.120 "dma_device_type": 2 00:08:44.120 } 00:08:44.120 ], 00:08:44.120 "driver_specific": { 00:08:44.120 "raid": { 00:08:44.120 "uuid": "1c00e101-3a47-4dac-831e-f4377f56df21", 00:08:44.120 "strip_size_kb": 64, 00:08:44.120 "state": "online", 00:08:44.120 "raid_level": "concat", 00:08:44.120 "superblock": false, 00:08:44.120 "num_base_bdevs": 3, 00:08:44.120 "num_base_bdevs_discovered": 3, 00:08:44.120 "num_base_bdevs_operational": 3, 00:08:44.120 "base_bdevs_list": [ 00:08:44.120 { 00:08:44.120 "name": "BaseBdev1", 00:08:44.120 "uuid": "d6de1c8e-24b4-4a1d-9d2d-7d58b3f52940", 00:08:44.120 "is_configured": true, 00:08:44.120 "data_offset": 0, 00:08:44.120 "data_size": 65536 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "name": "BaseBdev2", 00:08:44.120 "uuid": "1739eff5-1fae-4513-afaf-dd4f4ad8386d", 00:08:44.120 "is_configured": true, 00:08:44.120 "data_offset": 0, 00:08:44.120 "data_size": 65536 00:08:44.120 }, 00:08:44.120 { 00:08:44.120 "name": "BaseBdev3", 00:08:44.120 "uuid": "c9129b3d-e2bd-4412-8144-0699035b5840", 00:08:44.120 "is_configured": true, 00:08:44.120 "data_offset": 0, 00:08:44.120 "data_size": 65536 00:08:44.120 } 00:08:44.120 ] 00:08:44.120 } 00:08:44.120 } 00:08:44.120 }' 00:08:44.120 22:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.120 BaseBdev2 00:08:44.120 BaseBdev3' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.120 22:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.120 22:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.120 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 [2024-11-26 22:53:23.222102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.121 [2024-11-26 22:53:23.222135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.121 [2024-11-26 22:53:23.222204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.121 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.380 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.380 22:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.380 "name": "Existed_Raid", 00:08:44.380 "uuid": "1c00e101-3a47-4dac-831e-f4377f56df21", 00:08:44.380 "strip_size_kb": 64, 00:08:44.380 "state": "offline", 00:08:44.380 "raid_level": "concat", 00:08:44.380 "superblock": false, 00:08:44.380 "num_base_bdevs": 3, 00:08:44.380 "num_base_bdevs_discovered": 2, 00:08:44.380 "num_base_bdevs_operational": 2, 00:08:44.380 "base_bdevs_list": [ 00:08:44.380 { 00:08:44.380 "name": null, 00:08:44.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.380 "is_configured": false, 00:08:44.381 "data_offset": 0, 00:08:44.381 "data_size": 65536 00:08:44.381 }, 00:08:44.381 { 00:08:44.381 "name": "BaseBdev2", 00:08:44.381 "uuid": "1739eff5-1fae-4513-afaf-dd4f4ad8386d", 00:08:44.381 "is_configured": true, 00:08:44.381 "data_offset": 0, 00:08:44.381 "data_size": 65536 00:08:44.381 }, 00:08:44.381 { 00:08:44.381 "name": "BaseBdev3", 00:08:44.381 "uuid": "c9129b3d-e2bd-4412-8144-0699035b5840", 00:08:44.381 "is_configured": true, 00:08:44.381 "data_offset": 0, 00:08:44.381 "data_size": 65536 00:08:44.381 } 00:08:44.381 ] 00:08:44.381 }' 00:08:44.381 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.381 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 [2024-11-26 22:53:23.669208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 [2024-11-26 22:53:23.732238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.641 [2024-11-26 22:53:23.732335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:44.641 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.902 BaseBdev2 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.902 [ 00:08:44.902 { 00:08:44.902 "name": "BaseBdev2", 00:08:44.902 "aliases": [ 00:08:44.902 
"ce52888b-a841-4e81-bc86-1be48ff1ab44" 00:08:44.902 ], 00:08:44.902 "product_name": "Malloc disk", 00:08:44.902 "block_size": 512, 00:08:44.902 "num_blocks": 65536, 00:08:44.902 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:44.902 "assigned_rate_limits": { 00:08:44.902 "rw_ios_per_sec": 0, 00:08:44.902 "rw_mbytes_per_sec": 0, 00:08:44.902 "r_mbytes_per_sec": 0, 00:08:44.902 "w_mbytes_per_sec": 0 00:08:44.902 }, 00:08:44.902 "claimed": false, 00:08:44.902 "zoned": false, 00:08:44.902 "supported_io_types": { 00:08:44.902 "read": true, 00:08:44.902 "write": true, 00:08:44.902 "unmap": true, 00:08:44.902 "flush": true, 00:08:44.902 "reset": true, 00:08:44.902 "nvme_admin": false, 00:08:44.902 "nvme_io": false, 00:08:44.902 "nvme_io_md": false, 00:08:44.902 "write_zeroes": true, 00:08:44.902 "zcopy": true, 00:08:44.902 "get_zone_info": false, 00:08:44.902 "zone_management": false, 00:08:44.902 "zone_append": false, 00:08:44.902 "compare": false, 00:08:44.902 "compare_and_write": false, 00:08:44.902 "abort": true, 00:08:44.902 "seek_hole": false, 00:08:44.902 "seek_data": false, 00:08:44.902 "copy": true, 00:08:44.902 "nvme_iov_md": false 00:08:44.902 }, 00:08:44.902 "memory_domains": [ 00:08:44.902 { 00:08:44.902 "dma_device_id": "system", 00:08:44.902 "dma_device_type": 1 00:08:44.902 }, 00:08:44.902 { 00:08:44.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.902 "dma_device_type": 2 00:08:44.902 } 00:08:44.902 ], 00:08:44.902 "driver_specific": {} 00:08:44.902 } 00:08:44.902 ] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.902 BaseBdev3 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.902 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.903 [ 00:08:44.903 { 00:08:44.903 "name": "BaseBdev3", 00:08:44.903 "aliases": [ 00:08:44.903 
"075c6e9b-2d4d-4d94-b839-b6b75393c40f" 00:08:44.903 ], 00:08:44.903 "product_name": "Malloc disk", 00:08:44.903 "block_size": 512, 00:08:44.903 "num_blocks": 65536, 00:08:44.903 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:44.903 "assigned_rate_limits": { 00:08:44.903 "rw_ios_per_sec": 0, 00:08:44.903 "rw_mbytes_per_sec": 0, 00:08:44.903 "r_mbytes_per_sec": 0, 00:08:44.903 "w_mbytes_per_sec": 0 00:08:44.903 }, 00:08:44.903 "claimed": false, 00:08:44.903 "zoned": false, 00:08:44.903 "supported_io_types": { 00:08:44.903 "read": true, 00:08:44.903 "write": true, 00:08:44.903 "unmap": true, 00:08:44.903 "flush": true, 00:08:44.903 "reset": true, 00:08:44.903 "nvme_admin": false, 00:08:44.903 "nvme_io": false, 00:08:44.903 "nvme_io_md": false, 00:08:44.903 "write_zeroes": true, 00:08:44.903 "zcopy": true, 00:08:44.903 "get_zone_info": false, 00:08:44.903 "zone_management": false, 00:08:44.903 "zone_append": false, 00:08:44.903 "compare": false, 00:08:44.903 "compare_and_write": false, 00:08:44.903 "abort": true, 00:08:44.903 "seek_hole": false, 00:08:44.903 "seek_data": false, 00:08:44.903 "copy": true, 00:08:44.903 "nvme_iov_md": false 00:08:44.903 }, 00:08:44.903 "memory_domains": [ 00:08:44.903 { 00:08:44.903 "dma_device_id": "system", 00:08:44.903 "dma_device_type": 1 00:08:44.903 }, 00:08:44.903 { 00:08:44.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.903 "dma_device_type": 2 00:08:44.903 } 00:08:44.903 ], 00:08:44.903 "driver_specific": {} 00:08:44.903 } 00:08:44.903 ] 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.903 [2024-11-26 22:53:23.896192] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.903 [2024-11-26 22:53:23.896243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.903 [2024-11-26 22:53:23.896274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.903 [2024-11-26 22:53:23.898022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.903 "name": "Existed_Raid", 00:08:44.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.903 "strip_size_kb": 64, 00:08:44.903 "state": "configuring", 00:08:44.903 "raid_level": "concat", 00:08:44.903 "superblock": false, 00:08:44.903 "num_base_bdevs": 3, 00:08:44.903 "num_base_bdevs_discovered": 2, 00:08:44.903 "num_base_bdevs_operational": 3, 00:08:44.903 "base_bdevs_list": [ 00:08:44.903 { 00:08:44.903 "name": "BaseBdev1", 00:08:44.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.903 "is_configured": false, 00:08:44.903 "data_offset": 0, 00:08:44.903 "data_size": 0 00:08:44.903 }, 00:08:44.903 { 00:08:44.903 "name": "BaseBdev2", 00:08:44.903 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:44.903 "is_configured": true, 00:08:44.903 "data_offset": 0, 00:08:44.903 "data_size": 65536 00:08:44.903 }, 00:08:44.903 { 00:08:44.903 "name": "BaseBdev3", 00:08:44.903 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:44.903 "is_configured": true, 00:08:44.903 "data_offset": 0, 00:08:44.903 "data_size": 65536 00:08:44.903 } 00:08:44.903 ] 00:08:44.903 }' 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:44.903 22:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.163 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:45.163 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.163 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.423 [2024-11-26 22:53:24.292329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.423 22:53:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.423 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.423 "name": "Existed_Raid", 00:08:45.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.423 "strip_size_kb": 64, 00:08:45.423 "state": "configuring", 00:08:45.424 "raid_level": "concat", 00:08:45.424 "superblock": false, 00:08:45.424 "num_base_bdevs": 3, 00:08:45.424 "num_base_bdevs_discovered": 1, 00:08:45.424 "num_base_bdevs_operational": 3, 00:08:45.424 "base_bdevs_list": [ 00:08:45.424 { 00:08:45.424 "name": "BaseBdev1", 00:08:45.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.424 "is_configured": false, 00:08:45.424 "data_offset": 0, 00:08:45.424 "data_size": 0 00:08:45.424 }, 00:08:45.424 { 00:08:45.424 "name": null, 00:08:45.424 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:45.424 "is_configured": false, 00:08:45.424 "data_offset": 0, 00:08:45.424 "data_size": 65536 00:08:45.424 }, 00:08:45.424 { 00:08:45.424 "name": "BaseBdev3", 00:08:45.424 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:45.424 "is_configured": true, 00:08:45.424 "data_offset": 0, 00:08:45.424 "data_size": 65536 00:08:45.424 } 00:08:45.424 ] 00:08:45.424 }' 00:08:45.424 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.424 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.730 22:53:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.730 [2024-11-26 22:53:24.811311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.730 BaseBdev1 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.730 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.991 [ 00:08:45.991 { 00:08:45.991 "name": "BaseBdev1", 00:08:45.991 "aliases": [ 00:08:45.991 "86660fb4-dc7c-46f7-ad2b-9ad4c642638b" 00:08:45.991 ], 00:08:45.991 "product_name": "Malloc disk", 00:08:45.991 "block_size": 512, 00:08:45.991 "num_blocks": 65536, 00:08:45.991 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:45.991 "assigned_rate_limits": { 00:08:45.991 "rw_ios_per_sec": 0, 00:08:45.991 "rw_mbytes_per_sec": 0, 00:08:45.991 "r_mbytes_per_sec": 0, 00:08:45.991 "w_mbytes_per_sec": 0 00:08:45.991 }, 00:08:45.991 "claimed": true, 00:08:45.991 "claim_type": "exclusive_write", 00:08:45.991 "zoned": false, 00:08:45.991 "supported_io_types": { 00:08:45.991 "read": true, 00:08:45.991 "write": true, 00:08:45.991 "unmap": true, 00:08:45.991 "flush": true, 00:08:45.991 "reset": true, 00:08:45.991 "nvme_admin": false, 00:08:45.991 "nvme_io": false, 00:08:45.991 "nvme_io_md": false, 00:08:45.991 "write_zeroes": true, 00:08:45.991 "zcopy": true, 00:08:45.991 "get_zone_info": false, 00:08:45.991 "zone_management": false, 00:08:45.991 "zone_append": false, 00:08:45.991 "compare": false, 00:08:45.991 "compare_and_write": false, 00:08:45.991 "abort": true, 00:08:45.991 "seek_hole": false, 00:08:45.991 "seek_data": false, 00:08:45.991 "copy": true, 00:08:45.991 "nvme_iov_md": false 00:08:45.991 }, 00:08:45.991 "memory_domains": [ 00:08:45.991 { 00:08:45.991 
"dma_device_id": "system", 00:08:45.991 "dma_device_type": 1 00:08:45.991 }, 00:08:45.991 { 00:08:45.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.991 "dma_device_type": 2 00:08:45.991 } 00:08:45.991 ], 00:08:45.991 "driver_specific": {} 00:08:45.991 } 00:08:45.991 ] 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.991 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.991 "name": "Existed_Raid", 00:08:45.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.991 "strip_size_kb": 64, 00:08:45.991 "state": "configuring", 00:08:45.991 "raid_level": "concat", 00:08:45.991 "superblock": false, 00:08:45.991 "num_base_bdevs": 3, 00:08:45.991 "num_base_bdevs_discovered": 2, 00:08:45.991 "num_base_bdevs_operational": 3, 00:08:45.991 "base_bdevs_list": [ 00:08:45.991 { 00:08:45.991 "name": "BaseBdev1", 00:08:45.991 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:45.991 "is_configured": true, 00:08:45.991 "data_offset": 0, 00:08:45.991 "data_size": 65536 00:08:45.991 }, 00:08:45.991 { 00:08:45.991 "name": null, 00:08:45.991 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:45.992 "is_configured": false, 00:08:45.992 "data_offset": 0, 00:08:45.992 "data_size": 65536 00:08:45.992 }, 00:08:45.992 { 00:08:45.992 "name": "BaseBdev3", 00:08:45.992 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:45.992 "is_configured": true, 00:08:45.992 "data_offset": 0, 00:08:45.992 "data_size": 65536 00:08:45.992 } 00:08:45.992 ] 00:08:45.992 }' 00:08:45.992 22:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.992 22:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.252 [2024-11-26 22:53:25.311507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.252 "name": "Existed_Raid", 00:08:46.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.252 "strip_size_kb": 64, 00:08:46.252 "state": "configuring", 00:08:46.252 "raid_level": "concat", 00:08:46.252 "superblock": false, 00:08:46.252 "num_base_bdevs": 3, 00:08:46.252 "num_base_bdevs_discovered": 1, 00:08:46.252 "num_base_bdevs_operational": 3, 00:08:46.252 "base_bdevs_list": [ 00:08:46.252 { 00:08:46.252 "name": "BaseBdev1", 00:08:46.252 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:46.252 "is_configured": true, 00:08:46.252 "data_offset": 0, 00:08:46.252 "data_size": 65536 00:08:46.252 }, 00:08:46.252 { 00:08:46.252 "name": null, 00:08:46.252 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:46.252 "is_configured": false, 00:08:46.252 "data_offset": 0, 00:08:46.252 "data_size": 65536 00:08:46.252 }, 00:08:46.252 { 00:08:46.252 "name": null, 00:08:46.252 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:46.252 "is_configured": false, 00:08:46.252 "data_offset": 0, 00:08:46.252 "data_size": 65536 00:08:46.252 } 00:08:46.252 ] 00:08:46.252 }' 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.252 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.823 [2024-11-26 22:53:25.827698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.823 "name": "Existed_Raid", 00:08:46.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.823 "strip_size_kb": 64, 00:08:46.823 "state": "configuring", 00:08:46.823 "raid_level": "concat", 00:08:46.823 "superblock": false, 00:08:46.823 "num_base_bdevs": 3, 00:08:46.823 "num_base_bdevs_discovered": 2, 00:08:46.823 "num_base_bdevs_operational": 3, 00:08:46.823 "base_bdevs_list": [ 00:08:46.823 { 00:08:46.823 "name": "BaseBdev1", 00:08:46.823 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:46.823 "is_configured": true, 00:08:46.823 "data_offset": 0, 00:08:46.823 "data_size": 65536 00:08:46.823 }, 00:08:46.823 { 00:08:46.823 "name": null, 00:08:46.823 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:46.823 "is_configured": false, 00:08:46.823 "data_offset": 0, 00:08:46.823 "data_size": 65536 00:08:46.823 }, 00:08:46.823 { 
00:08:46.823 "name": "BaseBdev3", 00:08:46.823 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:46.823 "is_configured": true, 00:08:46.823 "data_offset": 0, 00:08:46.823 "data_size": 65536 00:08:46.823 } 00:08:46.823 ] 00:08:46.823 }' 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.823 22:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.393 [2024-11-26 22:53:26.291842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.393 "name": "Existed_Raid", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.393 "strip_size_kb": 64, 00:08:47.393 "state": "configuring", 00:08:47.393 "raid_level": "concat", 00:08:47.393 "superblock": false, 00:08:47.393 "num_base_bdevs": 3, 00:08:47.393 "num_base_bdevs_discovered": 1, 00:08:47.393 "num_base_bdevs_operational": 3, 00:08:47.393 "base_bdevs_list": [ 00:08:47.393 { 00:08:47.393 "name": null, 00:08:47.393 "uuid": 
"86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:47.393 "is_configured": false, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 65536 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": null, 00:08:47.393 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:47.393 "is_configured": false, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 65536 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": "BaseBdev3", 00:08:47.393 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:47.393 "is_configured": true, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 65536 00:08:47.393 } 00:08:47.393 ] 00:08:47.393 }' 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.393 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 [2024-11-26 22:53:26.694273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.653 "name": "Existed_Raid", 00:08:47.653 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.653 "strip_size_kb": 64, 00:08:47.653 "state": "configuring", 00:08:47.653 "raid_level": "concat", 00:08:47.653 "superblock": false, 00:08:47.653 "num_base_bdevs": 3, 00:08:47.653 "num_base_bdevs_discovered": 2, 00:08:47.653 "num_base_bdevs_operational": 3, 00:08:47.653 "base_bdevs_list": [ 00:08:47.653 { 00:08:47.653 "name": null, 00:08:47.653 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:47.653 "is_configured": false, 00:08:47.653 "data_offset": 0, 00:08:47.653 "data_size": 65536 00:08:47.653 }, 00:08:47.653 { 00:08:47.653 "name": "BaseBdev2", 00:08:47.653 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:47.653 "is_configured": true, 00:08:47.653 "data_offset": 0, 00:08:47.653 "data_size": 65536 00:08:47.653 }, 00:08:47.653 { 00:08:47.653 "name": "BaseBdev3", 00:08:47.653 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:47.653 "is_configured": true, 00:08:47.653 "data_offset": 0, 00:08:47.653 "data_size": 65536 00:08:47.653 } 00:08:47.653 ] 00:08:47.653 }' 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.653 22:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 86660fb4-dc7c-46f7-ad2b-9ad4c642638b 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 [2024-11-26 22:53:27.197232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:48.223 [2024-11-26 22:53:27.197289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.223 [2024-11-26 22:53:27.197298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:48.223 [2024-11-26 22:53:27.197561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:48.223 [2024-11-26 22:53:27.197682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.223 [2024-11-26 22:53:27.197700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.223 [2024-11-26 22:53:27.197866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.223 NewBaseBdev 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 [ 00:08:48.223 { 00:08:48.223 "name": "NewBaseBdev", 00:08:48.223 "aliases": [ 00:08:48.223 "86660fb4-dc7c-46f7-ad2b-9ad4c642638b" 00:08:48.223 ], 00:08:48.223 "product_name": "Malloc disk", 00:08:48.223 "block_size": 512, 00:08:48.223 "num_blocks": 65536, 00:08:48.223 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:48.223 "assigned_rate_limits": { 00:08:48.223 "rw_ios_per_sec": 0, 00:08:48.223 "rw_mbytes_per_sec": 0, 00:08:48.223 "r_mbytes_per_sec": 0, 00:08:48.223 "w_mbytes_per_sec": 0 00:08:48.223 }, 00:08:48.223 "claimed": true, 00:08:48.223 "claim_type": 
"exclusive_write", 00:08:48.223 "zoned": false, 00:08:48.223 "supported_io_types": { 00:08:48.223 "read": true, 00:08:48.223 "write": true, 00:08:48.223 "unmap": true, 00:08:48.223 "flush": true, 00:08:48.223 "reset": true, 00:08:48.223 "nvme_admin": false, 00:08:48.223 "nvme_io": false, 00:08:48.223 "nvme_io_md": false, 00:08:48.223 "write_zeroes": true, 00:08:48.223 "zcopy": true, 00:08:48.223 "get_zone_info": false, 00:08:48.223 "zone_management": false, 00:08:48.223 "zone_append": false, 00:08:48.223 "compare": false, 00:08:48.223 "compare_and_write": false, 00:08:48.223 "abort": true, 00:08:48.223 "seek_hole": false, 00:08:48.223 "seek_data": false, 00:08:48.223 "copy": true, 00:08:48.223 "nvme_iov_md": false 00:08:48.223 }, 00:08:48.223 "memory_domains": [ 00:08:48.223 { 00:08:48.223 "dma_device_id": "system", 00:08:48.223 "dma_device_type": 1 00:08:48.223 }, 00:08:48.223 { 00:08:48.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.223 "dma_device_type": 2 00:08:48.223 } 00:08:48.223 ], 00:08:48.223 "driver_specific": {} 00:08:48.223 } 00:08:48.223 ] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.223 "name": "Existed_Raid", 00:08:48.223 "uuid": "c700b857-aa94-4107-ad43-8cab0d573f17", 00:08:48.223 "strip_size_kb": 64, 00:08:48.223 "state": "online", 00:08:48.223 "raid_level": "concat", 00:08:48.223 "superblock": false, 00:08:48.223 "num_base_bdevs": 3, 00:08:48.223 "num_base_bdevs_discovered": 3, 00:08:48.223 "num_base_bdevs_operational": 3, 00:08:48.223 "base_bdevs_list": [ 00:08:48.223 { 00:08:48.223 "name": "NewBaseBdev", 00:08:48.223 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:48.223 "is_configured": true, 00:08:48.223 "data_offset": 0, 00:08:48.223 "data_size": 65536 00:08:48.223 }, 00:08:48.223 { 00:08:48.223 "name": "BaseBdev2", 00:08:48.223 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:48.223 "is_configured": true, 00:08:48.223 "data_offset": 0, 00:08:48.223 "data_size": 65536 00:08:48.223 }, 00:08:48.223 { 
00:08:48.223 "name": "BaseBdev3", 00:08:48.223 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:48.223 "is_configured": true, 00:08:48.223 "data_offset": 0, 00:08:48.223 "data_size": 65536 00:08:48.223 } 00:08:48.223 ] 00:08:48.223 }' 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.223 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.482 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.742 [2024-11-26 22:53:27.609683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.742 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.742 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.742 "name": "Existed_Raid", 00:08:48.742 "aliases": [ 00:08:48.742 
"c700b857-aa94-4107-ad43-8cab0d573f17" 00:08:48.742 ], 00:08:48.742 "product_name": "Raid Volume", 00:08:48.742 "block_size": 512, 00:08:48.742 "num_blocks": 196608, 00:08:48.742 "uuid": "c700b857-aa94-4107-ad43-8cab0d573f17", 00:08:48.742 "assigned_rate_limits": { 00:08:48.742 "rw_ios_per_sec": 0, 00:08:48.742 "rw_mbytes_per_sec": 0, 00:08:48.742 "r_mbytes_per_sec": 0, 00:08:48.742 "w_mbytes_per_sec": 0 00:08:48.742 }, 00:08:48.742 "claimed": false, 00:08:48.742 "zoned": false, 00:08:48.742 "supported_io_types": { 00:08:48.742 "read": true, 00:08:48.742 "write": true, 00:08:48.742 "unmap": true, 00:08:48.742 "flush": true, 00:08:48.742 "reset": true, 00:08:48.742 "nvme_admin": false, 00:08:48.742 "nvme_io": false, 00:08:48.742 "nvme_io_md": false, 00:08:48.742 "write_zeroes": true, 00:08:48.742 "zcopy": false, 00:08:48.742 "get_zone_info": false, 00:08:48.742 "zone_management": false, 00:08:48.742 "zone_append": false, 00:08:48.742 "compare": false, 00:08:48.742 "compare_and_write": false, 00:08:48.742 "abort": false, 00:08:48.742 "seek_hole": false, 00:08:48.742 "seek_data": false, 00:08:48.742 "copy": false, 00:08:48.742 "nvme_iov_md": false 00:08:48.742 }, 00:08:48.742 "memory_domains": [ 00:08:48.742 { 00:08:48.742 "dma_device_id": "system", 00:08:48.742 "dma_device_type": 1 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.742 "dma_device_type": 2 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "dma_device_id": "system", 00:08:48.742 "dma_device_type": 1 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.742 "dma_device_type": 2 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "dma_device_id": "system", 00:08:48.742 "dma_device_type": 1 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.742 "dma_device_type": 2 00:08:48.742 } 00:08:48.742 ], 00:08:48.742 "driver_specific": { 00:08:48.742 "raid": { 00:08:48.742 "uuid": 
"c700b857-aa94-4107-ad43-8cab0d573f17", 00:08:48.742 "strip_size_kb": 64, 00:08:48.742 "state": "online", 00:08:48.742 "raid_level": "concat", 00:08:48.742 "superblock": false, 00:08:48.742 "num_base_bdevs": 3, 00:08:48.742 "num_base_bdevs_discovered": 3, 00:08:48.742 "num_base_bdevs_operational": 3, 00:08:48.742 "base_bdevs_list": [ 00:08:48.742 { 00:08:48.742 "name": "NewBaseBdev", 00:08:48.742 "uuid": "86660fb4-dc7c-46f7-ad2b-9ad4c642638b", 00:08:48.742 "is_configured": true, 00:08:48.742 "data_offset": 0, 00:08:48.742 "data_size": 65536 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "name": "BaseBdev2", 00:08:48.742 "uuid": "ce52888b-a841-4e81-bc86-1be48ff1ab44", 00:08:48.742 "is_configured": true, 00:08:48.742 "data_offset": 0, 00:08:48.742 "data_size": 65536 00:08:48.742 }, 00:08:48.742 { 00:08:48.742 "name": "BaseBdev3", 00:08:48.742 "uuid": "075c6e9b-2d4d-4d94-b839-b6b75393c40f", 00:08:48.742 "is_configured": true, 00:08:48.742 "data_offset": 0, 00:08:48.742 "data_size": 65536 00:08:48.742 } 00:08:48.742 ] 00:08:48.742 } 00:08:48.742 } 00:08:48.742 }' 00:08:48.742 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.742 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:48.742 BaseBdev2 00:08:48.742 BaseBdev3' 00:08:48.742 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.743 [2024-11-26 22:53:27.849455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.743 [2024-11-26 22:53:27.849484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.743 [2024-11-26 22:53:27.849544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.743 [2024-11-26 22:53:27.849597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.743 [2024-11-26 22:53:27.849607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78350 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78350 ']' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78350 00:08:48.743 
22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.743 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78350 00:08:49.003 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.003 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.003 killing process with pid 78350 00:08:49.003 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78350' 00:08:49.003 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78350 00:08:49.003 [2024-11-26 22:53:27.886278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.003 22:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78350 00:08:49.003 [2024-11-26 22:53:27.917043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.263 00:08:49.263 real 0m8.467s 00:08:49.263 user 0m14.408s 00:08:49.263 sys 0m1.760s 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.263 ************************************ 00:08:49.263 END TEST raid_state_function_test 00:08:49.263 ************************************ 00:08:49.263 22:53:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:49.263 22:53:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.263 22:53:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.263 22:53:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.263 ************************************ 00:08:49.263 START TEST raid_state_function_test_sb 00:08:49.263 ************************************ 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78955 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:49.263 Process raid pid: 78955 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78955' 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78955 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78955 ']' 00:08:49.263 
22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.263 22:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.263 [2024-11-26 22:53:28.319439] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:49.263 [2024-11-26 22:53:28.319557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.522 [2024-11-26 22:53:28.454685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:49.522 [2024-11-26 22:53:28.490328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.522 [2024-11-26 22:53:28.515477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.522 [2024-11-26 22:53:28.558168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.522 [2024-11-26 22:53:28.558224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.090 [2024-11-26 22:53:29.150567] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.090 [2024-11-26 22:53:29.150618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.090 [2024-11-26 22:53:29.150631] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.090 [2024-11-26 22:53:29.150639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.090 [2024-11-26 22:53:29.150651] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.090 [2024-11-26 22:53:29.150659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.090 22:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.090 "name": "Existed_Raid", 00:08:50.090 "uuid": "f16071c8-580e-47c7-be73-617852347641", 00:08:50.090 "strip_size_kb": 64, 
00:08:50.090 "state": "configuring", 00:08:50.090 "raid_level": "concat", 00:08:50.090 "superblock": true, 00:08:50.090 "num_base_bdevs": 3, 00:08:50.090 "num_base_bdevs_discovered": 0, 00:08:50.090 "num_base_bdevs_operational": 3, 00:08:50.090 "base_bdevs_list": [ 00:08:50.090 { 00:08:50.090 "name": "BaseBdev1", 00:08:50.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.090 "is_configured": false, 00:08:50.090 "data_offset": 0, 00:08:50.090 "data_size": 0 00:08:50.090 }, 00:08:50.090 { 00:08:50.090 "name": "BaseBdev2", 00:08:50.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.090 "is_configured": false, 00:08:50.090 "data_offset": 0, 00:08:50.090 "data_size": 0 00:08:50.090 }, 00:08:50.090 { 00:08:50.090 "name": "BaseBdev3", 00:08:50.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.090 "is_configured": false, 00:08:50.090 "data_offset": 0, 00:08:50.090 "data_size": 0 00:08:50.090 } 00:08:50.090 ] 00:08:50.090 }' 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.090 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.663 [2024-11-26 22:53:29.594612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.663 [2024-11-26 22:53:29.594651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.663 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 [2024-11-26 22:53:29.606645] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.664 [2024-11-26 22:53:29.606685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.664 [2024-11-26 22:53:29.606696] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.664 [2024-11-26 22:53:29.606703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.664 [2024-11-26 22:53:29.606712] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.664 [2024-11-26 22:53:29.606720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 [2024-11-26 22:53:29.627313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.664 BaseBdev1 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 [ 00:08:50.664 { 00:08:50.664 "name": "BaseBdev1", 00:08:50.664 "aliases": [ 00:08:50.664 "b24aef55-878e-47ab-a1d1-ec23171d488d" 00:08:50.664 ], 00:08:50.664 "product_name": "Malloc disk", 00:08:50.664 "block_size": 512, 00:08:50.664 "num_blocks": 65536, 00:08:50.664 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:50.664 "assigned_rate_limits": { 00:08:50.664 "rw_ios_per_sec": 0, 00:08:50.664 "rw_mbytes_per_sec": 0, 00:08:50.664 "r_mbytes_per_sec": 0, 00:08:50.664 "w_mbytes_per_sec": 0 00:08:50.664 }, 00:08:50.664 "claimed": true, 00:08:50.664 "claim_type": "exclusive_write", 00:08:50.664 "zoned": false, 00:08:50.664 "supported_io_types": { 
00:08:50.664 "read": true, 00:08:50.664 "write": true, 00:08:50.664 "unmap": true, 00:08:50.664 "flush": true, 00:08:50.664 "reset": true, 00:08:50.664 "nvme_admin": false, 00:08:50.664 "nvme_io": false, 00:08:50.664 "nvme_io_md": false, 00:08:50.664 "write_zeroes": true, 00:08:50.664 "zcopy": true, 00:08:50.664 "get_zone_info": false, 00:08:50.664 "zone_management": false, 00:08:50.664 "zone_append": false, 00:08:50.664 "compare": false, 00:08:50.664 "compare_and_write": false, 00:08:50.664 "abort": true, 00:08:50.664 "seek_hole": false, 00:08:50.664 "seek_data": false, 00:08:50.664 "copy": true, 00:08:50.664 "nvme_iov_md": false 00:08:50.664 }, 00:08:50.664 "memory_domains": [ 00:08:50.664 { 00:08:50.664 "dma_device_id": "system", 00:08:50.664 "dma_device_type": 1 00:08:50.664 }, 00:08:50.664 { 00:08:50.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.664 "dma_device_type": 2 00:08:50.664 } 00:08:50.664 ], 00:08:50.664 "driver_specific": {} 00:08:50.664 } 00:08:50.664 ] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.664 22:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.664 "name": "Existed_Raid", 00:08:50.664 "uuid": "c8ae97a4-2b87-42a6-a93c-e56053def17b", 00:08:50.664 "strip_size_kb": 64, 00:08:50.664 "state": "configuring", 00:08:50.664 "raid_level": "concat", 00:08:50.664 "superblock": true, 00:08:50.664 "num_base_bdevs": 3, 00:08:50.664 "num_base_bdevs_discovered": 1, 00:08:50.664 "num_base_bdevs_operational": 3, 00:08:50.664 "base_bdevs_list": [ 00:08:50.664 { 00:08:50.664 "name": "BaseBdev1", 00:08:50.664 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:50.664 "is_configured": true, 00:08:50.664 "data_offset": 2048, 00:08:50.664 "data_size": 63488 00:08:50.664 }, 00:08:50.664 { 00:08:50.664 "name": "BaseBdev2", 00:08:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.664 "is_configured": false, 00:08:50.664 "data_offset": 0, 00:08:50.664 "data_size": 0 00:08:50.664 }, 00:08:50.664 { 00:08:50.664 "name": 
"BaseBdev3", 00:08:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.664 "is_configured": false, 00:08:50.664 "data_offset": 0, 00:08:50.664 "data_size": 0 00:08:50.664 } 00:08:50.664 ] 00:08:50.664 }' 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.664 22:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 [2024-11-26 22:53:30.119474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.234 [2024-11-26 22:53:30.119525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 [2024-11-26 22:53:30.127530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.234 [2024-11-26 22:53:30.129335] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.234 [2024-11-26 22:53:30.129384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.234 [2024-11-26 22:53:30.129398] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:51.234 [2024-11-26 22:53:30.129406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.234 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.234 "name": "Existed_Raid", 00:08:51.234 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:51.234 "strip_size_kb": 64, 00:08:51.234 "state": "configuring", 00:08:51.234 "raid_level": "concat", 00:08:51.234 "superblock": true, 00:08:51.234 "num_base_bdevs": 3, 00:08:51.234 "num_base_bdevs_discovered": 1, 00:08:51.234 "num_base_bdevs_operational": 3, 00:08:51.235 "base_bdevs_list": [ 00:08:51.235 { 00:08:51.235 "name": "BaseBdev1", 00:08:51.235 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:51.235 "is_configured": true, 00:08:51.235 "data_offset": 2048, 00:08:51.235 "data_size": 63488 00:08:51.235 }, 00:08:51.235 { 00:08:51.235 "name": "BaseBdev2", 00:08:51.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.235 "is_configured": false, 00:08:51.235 "data_offset": 0, 00:08:51.235 "data_size": 0 00:08:51.235 }, 00:08:51.235 { 00:08:51.235 "name": "BaseBdev3", 00:08:51.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.235 "is_configured": false, 00:08:51.235 "data_offset": 0, 00:08:51.235 "data_size": 0 00:08:51.235 } 00:08:51.235 ] 00:08:51.235 }' 00:08:51.235 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.235 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.494 [2024-11-26 22:53:30.558399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.494 BaseBdev2 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:51.494 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.495 [ 00:08:51.495 { 00:08:51.495 "name": "BaseBdev2", 00:08:51.495 "aliases": [ 00:08:51.495 
"8fa72f6a-145c-4336-8b0c-5aa52a8830da" 00:08:51.495 ], 00:08:51.495 "product_name": "Malloc disk", 00:08:51.495 "block_size": 512, 00:08:51.495 "num_blocks": 65536, 00:08:51.495 "uuid": "8fa72f6a-145c-4336-8b0c-5aa52a8830da", 00:08:51.495 "assigned_rate_limits": { 00:08:51.495 "rw_ios_per_sec": 0, 00:08:51.495 "rw_mbytes_per_sec": 0, 00:08:51.495 "r_mbytes_per_sec": 0, 00:08:51.495 "w_mbytes_per_sec": 0 00:08:51.495 }, 00:08:51.495 "claimed": true, 00:08:51.495 "claim_type": "exclusive_write", 00:08:51.495 "zoned": false, 00:08:51.495 "supported_io_types": { 00:08:51.495 "read": true, 00:08:51.495 "write": true, 00:08:51.495 "unmap": true, 00:08:51.495 "flush": true, 00:08:51.495 "reset": true, 00:08:51.495 "nvme_admin": false, 00:08:51.495 "nvme_io": false, 00:08:51.495 "nvme_io_md": false, 00:08:51.495 "write_zeroes": true, 00:08:51.495 "zcopy": true, 00:08:51.495 "get_zone_info": false, 00:08:51.495 "zone_management": false, 00:08:51.495 "zone_append": false, 00:08:51.495 "compare": false, 00:08:51.495 "compare_and_write": false, 00:08:51.495 "abort": true, 00:08:51.495 "seek_hole": false, 00:08:51.495 "seek_data": false, 00:08:51.495 "copy": true, 00:08:51.495 "nvme_iov_md": false 00:08:51.495 }, 00:08:51.495 "memory_domains": [ 00:08:51.495 { 00:08:51.495 "dma_device_id": "system", 00:08:51.495 "dma_device_type": 1 00:08:51.495 }, 00:08:51.495 { 00:08:51.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.495 "dma_device_type": 2 00:08:51.495 } 00:08:51.495 ], 00:08:51.495 "driver_specific": {} 00:08:51.495 } 00:08:51.495 ] 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.495 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.755 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.755 "name": "Existed_Raid", 00:08:51.755 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:51.755 
"strip_size_kb": 64, 00:08:51.755 "state": "configuring", 00:08:51.755 "raid_level": "concat", 00:08:51.755 "superblock": true, 00:08:51.755 "num_base_bdevs": 3, 00:08:51.755 "num_base_bdevs_discovered": 2, 00:08:51.755 "num_base_bdevs_operational": 3, 00:08:51.755 "base_bdevs_list": [ 00:08:51.755 { 00:08:51.755 "name": "BaseBdev1", 00:08:51.755 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:51.755 "is_configured": true, 00:08:51.755 "data_offset": 2048, 00:08:51.755 "data_size": 63488 00:08:51.755 }, 00:08:51.755 { 00:08:51.755 "name": "BaseBdev2", 00:08:51.755 "uuid": "8fa72f6a-145c-4336-8b0c-5aa52a8830da", 00:08:51.755 "is_configured": true, 00:08:51.755 "data_offset": 2048, 00:08:51.755 "data_size": 63488 00:08:51.755 }, 00:08:51.755 { 00:08:51.755 "name": "BaseBdev3", 00:08:51.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.755 "is_configured": false, 00:08:51.755 "data_offset": 0, 00:08:51.755 "data_size": 0 00:08:51.755 } 00:08:51.755 ] 00:08:51.755 }' 00:08:51.755 22:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.755 22:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.015 [2024-11-26 22:53:31.072397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.015 [2024-11-26 22:53:31.072588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:52.015 [2024-11-26 22:53:31.072602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:52.015 [2024-11-26 22:53:31.072986] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.015 [2024-11-26 22:53:31.073154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:52.015 [2024-11-26 22:53:31.073185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:52.015 BaseBdev3 00:08:52.015 [2024-11-26 22:53:31.073330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:52.015 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.015 [ 00:08:52.015 { 00:08:52.015 "name": "BaseBdev3", 00:08:52.015 "aliases": [ 00:08:52.015 "42ea2e4a-4fa2-4d2a-b589-e6a1b46d9948" 00:08:52.015 ], 00:08:52.015 "product_name": "Malloc disk", 00:08:52.015 "block_size": 512, 00:08:52.015 "num_blocks": 65536, 00:08:52.015 "uuid": "42ea2e4a-4fa2-4d2a-b589-e6a1b46d9948", 00:08:52.015 "assigned_rate_limits": { 00:08:52.015 "rw_ios_per_sec": 0, 00:08:52.015 "rw_mbytes_per_sec": 0, 00:08:52.015 "r_mbytes_per_sec": 0, 00:08:52.015 "w_mbytes_per_sec": 0 00:08:52.015 }, 00:08:52.015 "claimed": true, 00:08:52.015 "claim_type": "exclusive_write", 00:08:52.015 "zoned": false, 00:08:52.015 "supported_io_types": { 00:08:52.015 "read": true, 00:08:52.015 "write": true, 00:08:52.015 "unmap": true, 00:08:52.015 "flush": true, 00:08:52.015 "reset": true, 00:08:52.015 "nvme_admin": false, 00:08:52.015 "nvme_io": false, 00:08:52.015 "nvme_io_md": false, 00:08:52.015 "write_zeroes": true, 00:08:52.015 "zcopy": true, 00:08:52.015 "get_zone_info": false, 00:08:52.015 "zone_management": false, 00:08:52.015 "zone_append": false, 00:08:52.015 "compare": false, 00:08:52.015 "compare_and_write": false, 00:08:52.015 "abort": true, 00:08:52.015 "seek_hole": false, 00:08:52.015 "seek_data": false, 00:08:52.015 "copy": true, 00:08:52.015 "nvme_iov_md": false 00:08:52.015 }, 00:08:52.015 "memory_domains": [ 00:08:52.015 { 00:08:52.015 "dma_device_id": "system", 00:08:52.015 "dma_device_type": 1 00:08:52.015 }, 00:08:52.015 { 00:08:52.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.016 "dma_device_type": 2 00:08:52.016 } 00:08:52.016 ], 00:08:52.016 "driver_specific": {} 00:08:52.016 } 00:08:52.016 ] 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.016 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.275 
22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.275 "name": "Existed_Raid", 00:08:52.275 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:52.275 "strip_size_kb": 64, 00:08:52.275 "state": "online", 00:08:52.275 "raid_level": "concat", 00:08:52.275 "superblock": true, 00:08:52.275 "num_base_bdevs": 3, 00:08:52.275 "num_base_bdevs_discovered": 3, 00:08:52.275 "num_base_bdevs_operational": 3, 00:08:52.275 "base_bdevs_list": [ 00:08:52.275 { 00:08:52.275 "name": "BaseBdev1", 00:08:52.275 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:52.275 "is_configured": true, 00:08:52.275 "data_offset": 2048, 00:08:52.275 "data_size": 63488 00:08:52.275 }, 00:08:52.275 { 00:08:52.275 "name": "BaseBdev2", 00:08:52.275 "uuid": "8fa72f6a-145c-4336-8b0c-5aa52a8830da", 00:08:52.275 "is_configured": true, 00:08:52.275 "data_offset": 2048, 00:08:52.275 "data_size": 63488 00:08:52.275 }, 00:08:52.275 { 00:08:52.275 "name": "BaseBdev3", 00:08:52.275 "uuid": "42ea2e4a-4fa2-4d2a-b589-e6a1b46d9948", 00:08:52.275 "is_configured": true, 00:08:52.275 "data_offset": 2048, 00:08:52.275 "data_size": 63488 00:08:52.275 } 00:08:52.275 ] 00:08:52.275 }' 00:08:52.275 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.275 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.535 [2024-11-26 22:53:31.560855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.535 "name": "Existed_Raid", 00:08:52.535 "aliases": [ 00:08:52.535 "f1a1aade-c235-4f33-a753-648a5be8d009" 00:08:52.535 ], 00:08:52.535 "product_name": "Raid Volume", 00:08:52.535 "block_size": 512, 00:08:52.535 "num_blocks": 190464, 00:08:52.535 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:52.535 "assigned_rate_limits": { 00:08:52.535 "rw_ios_per_sec": 0, 00:08:52.535 "rw_mbytes_per_sec": 0, 00:08:52.535 "r_mbytes_per_sec": 0, 00:08:52.535 "w_mbytes_per_sec": 0 00:08:52.535 }, 00:08:52.535 "claimed": false, 00:08:52.535 "zoned": false, 00:08:52.535 "supported_io_types": { 00:08:52.535 "read": true, 00:08:52.535 "write": true, 00:08:52.535 "unmap": true, 00:08:52.535 "flush": true, 00:08:52.535 "reset": true, 00:08:52.535 "nvme_admin": false, 00:08:52.535 "nvme_io": false, 00:08:52.535 "nvme_io_md": false, 00:08:52.535 "write_zeroes": true, 00:08:52.535 "zcopy": false, 00:08:52.535 "get_zone_info": false, 00:08:52.535 "zone_management": false, 00:08:52.535 "zone_append": false, 00:08:52.535 "compare": false, 00:08:52.535 "compare_and_write": false, 
00:08:52.535 "abort": false, 00:08:52.535 "seek_hole": false, 00:08:52.535 "seek_data": false, 00:08:52.535 "copy": false, 00:08:52.535 "nvme_iov_md": false 00:08:52.535 }, 00:08:52.535 "memory_domains": [ 00:08:52.535 { 00:08:52.535 "dma_device_id": "system", 00:08:52.535 "dma_device_type": 1 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.535 "dma_device_type": 2 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "dma_device_id": "system", 00:08:52.535 "dma_device_type": 1 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.535 "dma_device_type": 2 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "dma_device_id": "system", 00:08:52.535 "dma_device_type": 1 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.535 "dma_device_type": 2 00:08:52.535 } 00:08:52.535 ], 00:08:52.535 "driver_specific": { 00:08:52.535 "raid": { 00:08:52.535 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:52.535 "strip_size_kb": 64, 00:08:52.535 "state": "online", 00:08:52.535 "raid_level": "concat", 00:08:52.535 "superblock": true, 00:08:52.535 "num_base_bdevs": 3, 00:08:52.535 "num_base_bdevs_discovered": 3, 00:08:52.535 "num_base_bdevs_operational": 3, 00:08:52.535 "base_bdevs_list": [ 00:08:52.535 { 00:08:52.535 "name": "BaseBdev1", 00:08:52.535 "uuid": "b24aef55-878e-47ab-a1d1-ec23171d488d", 00:08:52.535 "is_configured": true, 00:08:52.535 "data_offset": 2048, 00:08:52.535 "data_size": 63488 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "name": "BaseBdev2", 00:08:52.535 "uuid": "8fa72f6a-145c-4336-8b0c-5aa52a8830da", 00:08:52.535 "is_configured": true, 00:08:52.535 "data_offset": 2048, 00:08:52.535 "data_size": 63488 00:08:52.535 }, 00:08:52.535 { 00:08:52.535 "name": "BaseBdev3", 00:08:52.535 "uuid": "42ea2e4a-4fa2-4d2a-b589-e6a1b46d9948", 00:08:52.535 "is_configured": true, 00:08:52.535 "data_offset": 2048, 00:08:52.535 "data_size": 63488 00:08:52.535 } 
00:08:52.535 ] 00:08:52.535 } 00:08:52.535 } 00:08:52.535 }' 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:52.535 BaseBdev2 00:08:52.535 BaseBdev3' 00:08:52.535 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.795 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.796 [2024-11-26 22:53:31.860719] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.796 [2024-11-26 22:53:31.860754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.796 [2024-11-26 22:53:31.860816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.796 "name": "Existed_Raid", 00:08:52.796 "uuid": "f1a1aade-c235-4f33-a753-648a5be8d009", 00:08:52.796 "strip_size_kb": 64, 00:08:52.796 "state": "offline", 00:08:52.796 "raid_level": "concat", 00:08:52.796 "superblock": true, 00:08:52.796 "num_base_bdevs": 3, 00:08:52.796 "num_base_bdevs_discovered": 2, 00:08:52.796 "num_base_bdevs_operational": 2, 00:08:52.796 "base_bdevs_list": [ 00:08:52.796 { 00:08:52.796 "name": null, 00:08:52.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.796 "is_configured": false, 00:08:52.796 "data_offset": 0, 00:08:52.796 "data_size": 63488 00:08:52.796 }, 00:08:52.796 { 00:08:52.796 "name": "BaseBdev2", 00:08:52.796 "uuid": "8fa72f6a-145c-4336-8b0c-5aa52a8830da", 00:08:52.796 "is_configured": true, 00:08:52.796 "data_offset": 2048, 00:08:52.796 "data_size": 63488 00:08:52.796 }, 00:08:52.796 { 00:08:52.796 "name": "BaseBdev3", 00:08:52.796 "uuid": "42ea2e4a-4fa2-4d2a-b589-e6a1b46d9948", 00:08:52.796 "is_configured": true, 00:08:52.796 "data_offset": 2048, 00:08:52.796 "data_size": 63488 00:08:52.796 } 00:08:52.796 ] 00:08:52.796 }' 00:08:52.796 22:53:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.796 22:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 [2024-11-26 22:53:32.380021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.366 
22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 [2024-11-26 22:53:32.447407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.366 [2024-11-26 22:53:32.447461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:53.366 22:53:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.366 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.625 BaseBdev2 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.625 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 [ 00:08:53.626 { 00:08:53.626 "name": "BaseBdev2", 00:08:53.626 "aliases": [ 00:08:53.626 "41d052df-8833-4b50-94b1-01fba0ae1fe3" 00:08:53.626 ], 00:08:53.626 "product_name": "Malloc disk", 00:08:53.626 "block_size": 512, 00:08:53.626 "num_blocks": 65536, 00:08:53.626 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:53.626 "assigned_rate_limits": { 00:08:53.626 "rw_ios_per_sec": 0, 00:08:53.626 "rw_mbytes_per_sec": 0, 00:08:53.626 "r_mbytes_per_sec": 0, 00:08:53.626 "w_mbytes_per_sec": 0 00:08:53.626 }, 00:08:53.626 "claimed": false, 00:08:53.626 "zoned": false, 00:08:53.626 "supported_io_types": { 00:08:53.626 "read": true, 00:08:53.626 "write": true, 00:08:53.626 "unmap": true, 00:08:53.626 "flush": true, 00:08:53.626 "reset": true, 00:08:53.626 "nvme_admin": false, 00:08:53.626 "nvme_io": false, 00:08:53.626 "nvme_io_md": false, 00:08:53.626 "write_zeroes": true, 00:08:53.626 "zcopy": true, 00:08:53.626 "get_zone_info": false, 00:08:53.626 "zone_management": false, 00:08:53.626 "zone_append": false, 00:08:53.626 "compare": false, 00:08:53.626 "compare_and_write": false, 00:08:53.626 "abort": true, 00:08:53.626 "seek_hole": 
false, 00:08:53.626 "seek_data": false, 00:08:53.626 "copy": true, 00:08:53.626 "nvme_iov_md": false 00:08:53.626 }, 00:08:53.626 "memory_domains": [ 00:08:53.626 { 00:08:53.626 "dma_device_id": "system", 00:08:53.626 "dma_device_type": 1 00:08:53.626 }, 00:08:53.626 { 00:08:53.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.626 "dma_device_type": 2 00:08:53.626 } 00:08:53.626 ], 00:08:53.626 "driver_specific": {} 00:08:53.626 } 00:08:53.626 ] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 BaseBdev3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.626 22:53:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 [ 00:08:53.626 { 00:08:53.626 "name": "BaseBdev3", 00:08:53.626 "aliases": [ 00:08:53.626 "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1" 00:08:53.626 ], 00:08:53.626 "product_name": "Malloc disk", 00:08:53.626 "block_size": 512, 00:08:53.626 "num_blocks": 65536, 00:08:53.626 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:53.626 "assigned_rate_limits": { 00:08:53.626 "rw_ios_per_sec": 0, 00:08:53.626 "rw_mbytes_per_sec": 0, 00:08:53.626 "r_mbytes_per_sec": 0, 00:08:53.626 "w_mbytes_per_sec": 0 00:08:53.626 }, 00:08:53.626 "claimed": false, 00:08:53.626 "zoned": false, 00:08:53.626 "supported_io_types": { 00:08:53.626 "read": true, 00:08:53.626 "write": true, 00:08:53.626 "unmap": true, 00:08:53.626 "flush": true, 00:08:53.626 "reset": true, 00:08:53.626 "nvme_admin": false, 00:08:53.626 "nvme_io": false, 00:08:53.626 "nvme_io_md": false, 00:08:53.626 "write_zeroes": true, 00:08:53.626 "zcopy": true, 00:08:53.626 "get_zone_info": false, 00:08:53.626 "zone_management": false, 00:08:53.626 "zone_append": false, 00:08:53.626 "compare": false, 00:08:53.626 
"compare_and_write": false, 00:08:53.626 "abort": true, 00:08:53.626 "seek_hole": false, 00:08:53.626 "seek_data": false, 00:08:53.626 "copy": true, 00:08:53.626 "nvme_iov_md": false 00:08:53.626 }, 00:08:53.626 "memory_domains": [ 00:08:53.626 { 00:08:53.626 "dma_device_id": "system", 00:08:53.626 "dma_device_type": 1 00:08:53.626 }, 00:08:53.626 { 00:08:53.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.626 "dma_device_type": 2 00:08:53.626 } 00:08:53.626 ], 00:08:53.626 "driver_specific": {} 00:08:53.626 } 00:08:53.626 ] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 [2024-11-26 22:53:32.614495] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.626 [2024-11-26 22:53:32.614543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.626 [2024-11-26 22:53:32.614562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.626 [2024-11-26 22:53:32.616373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.626 "name": "Existed_Raid", 00:08:53.626 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:53.626 
"strip_size_kb": 64, 00:08:53.626 "state": "configuring", 00:08:53.626 "raid_level": "concat", 00:08:53.626 "superblock": true, 00:08:53.626 "num_base_bdevs": 3, 00:08:53.626 "num_base_bdevs_discovered": 2, 00:08:53.626 "num_base_bdevs_operational": 3, 00:08:53.626 "base_bdevs_list": [ 00:08:53.626 { 00:08:53.626 "name": "BaseBdev1", 00:08:53.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.626 "is_configured": false, 00:08:53.626 "data_offset": 0, 00:08:53.626 "data_size": 0 00:08:53.626 }, 00:08:53.626 { 00:08:53.626 "name": "BaseBdev2", 00:08:53.626 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:53.626 "is_configured": true, 00:08:53.626 "data_offset": 2048, 00:08:53.626 "data_size": 63488 00:08:53.626 }, 00:08:53.626 { 00:08:53.626 "name": "BaseBdev3", 00:08:53.626 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:53.626 "is_configured": true, 00:08:53.626 "data_offset": 2048, 00:08:53.626 "data_size": 63488 00:08:53.626 } 00:08:53.626 ] 00:08:53.626 }' 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.626 22:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.196 [2024-11-26 22:53:33.054619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.196 "name": "Existed_Raid", 00:08:54.196 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:54.196 "strip_size_kb": 64, 00:08:54.196 "state": "configuring", 00:08:54.196 "raid_level": "concat", 00:08:54.196 "superblock": true, 00:08:54.196 "num_base_bdevs": 3, 00:08:54.196 "num_base_bdevs_discovered": 1, 
00:08:54.196 "num_base_bdevs_operational": 3, 00:08:54.196 "base_bdevs_list": [ 00:08:54.196 { 00:08:54.196 "name": "BaseBdev1", 00:08:54.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.196 "is_configured": false, 00:08:54.196 "data_offset": 0, 00:08:54.196 "data_size": 0 00:08:54.196 }, 00:08:54.196 { 00:08:54.196 "name": null, 00:08:54.196 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:54.196 "is_configured": false, 00:08:54.196 "data_offset": 0, 00:08:54.196 "data_size": 63488 00:08:54.196 }, 00:08:54.196 { 00:08:54.196 "name": "BaseBdev3", 00:08:54.196 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:54.196 "is_configured": true, 00:08:54.196 "data_offset": 2048, 00:08:54.196 "data_size": 63488 00:08:54.196 } 00:08:54.196 ] 00:08:54.196 }' 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.196 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.456 [2024-11-26 22:53:33.501481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.456 BaseBdev1 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.456 [ 00:08:54.456 { 00:08:54.456 "name": "BaseBdev1", 00:08:54.456 "aliases": [ 00:08:54.456 "dbc556ac-f6a6-4304-94a5-1568d5ed629c" 00:08:54.456 ], 00:08:54.456 "product_name": "Malloc 
disk", 00:08:54.456 "block_size": 512, 00:08:54.456 "num_blocks": 65536, 00:08:54.456 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:54.456 "assigned_rate_limits": { 00:08:54.456 "rw_ios_per_sec": 0, 00:08:54.456 "rw_mbytes_per_sec": 0, 00:08:54.456 "r_mbytes_per_sec": 0, 00:08:54.456 "w_mbytes_per_sec": 0 00:08:54.456 }, 00:08:54.456 "claimed": true, 00:08:54.456 "claim_type": "exclusive_write", 00:08:54.456 "zoned": false, 00:08:54.456 "supported_io_types": { 00:08:54.456 "read": true, 00:08:54.456 "write": true, 00:08:54.456 "unmap": true, 00:08:54.456 "flush": true, 00:08:54.456 "reset": true, 00:08:54.456 "nvme_admin": false, 00:08:54.456 "nvme_io": false, 00:08:54.456 "nvme_io_md": false, 00:08:54.456 "write_zeroes": true, 00:08:54.456 "zcopy": true, 00:08:54.456 "get_zone_info": false, 00:08:54.456 "zone_management": false, 00:08:54.456 "zone_append": false, 00:08:54.456 "compare": false, 00:08:54.456 "compare_and_write": false, 00:08:54.456 "abort": true, 00:08:54.456 "seek_hole": false, 00:08:54.456 "seek_data": false, 00:08:54.456 "copy": true, 00:08:54.456 "nvme_iov_md": false 00:08:54.456 }, 00:08:54.456 "memory_domains": [ 00:08:54.456 { 00:08:54.456 "dma_device_id": "system", 00:08:54.456 "dma_device_type": 1 00:08:54.456 }, 00:08:54.456 { 00:08:54.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.456 "dma_device_type": 2 00:08:54.456 } 00:08:54.456 ], 00:08:54.456 "driver_specific": {} 00:08:54.456 } 00:08:54.456 ] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.456 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.457 22:53:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.457 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.716 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.716 "name": "Existed_Raid", 00:08:54.716 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:54.716 "strip_size_kb": 64, 00:08:54.716 "state": "configuring", 00:08:54.716 "raid_level": "concat", 00:08:54.716 "superblock": true, 00:08:54.716 "num_base_bdevs": 3, 00:08:54.716 "num_base_bdevs_discovered": 2, 00:08:54.716 "num_base_bdevs_operational": 3, 00:08:54.716 "base_bdevs_list": [ 00:08:54.716 
{ 00:08:54.716 "name": "BaseBdev1", 00:08:54.716 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:54.716 "is_configured": true, 00:08:54.716 "data_offset": 2048, 00:08:54.716 "data_size": 63488 00:08:54.716 }, 00:08:54.716 { 00:08:54.716 "name": null, 00:08:54.716 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:54.716 "is_configured": false, 00:08:54.716 "data_offset": 0, 00:08:54.716 "data_size": 63488 00:08:54.716 }, 00:08:54.716 { 00:08:54.716 "name": "BaseBdev3", 00:08:54.716 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:54.716 "is_configured": true, 00:08:54.716 "data_offset": 2048, 00:08:54.716 "data_size": 63488 00:08:54.716 } 00:08:54.716 ] 00:08:54.716 }' 00:08:54.716 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.716 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.976 [2024-11-26 22:53:33.981664] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.976 22:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.976 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.976 22:53:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.976 "name": "Existed_Raid", 00:08:54.976 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:54.976 "strip_size_kb": 64, 00:08:54.976 "state": "configuring", 00:08:54.976 "raid_level": "concat", 00:08:54.976 "superblock": true, 00:08:54.976 "num_base_bdevs": 3, 00:08:54.976 "num_base_bdevs_discovered": 1, 00:08:54.976 "num_base_bdevs_operational": 3, 00:08:54.976 "base_bdevs_list": [ 00:08:54.976 { 00:08:54.976 "name": "BaseBdev1", 00:08:54.976 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:54.976 "is_configured": true, 00:08:54.976 "data_offset": 2048, 00:08:54.976 "data_size": 63488 00:08:54.976 }, 00:08:54.976 { 00:08:54.976 "name": null, 00:08:54.976 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:54.976 "is_configured": false, 00:08:54.976 "data_offset": 0, 00:08:54.976 "data_size": 63488 00:08:54.976 }, 00:08:54.976 { 00:08:54.976 "name": null, 00:08:54.976 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:54.976 "is_configured": false, 00:08:54.976 "data_offset": 0, 00:08:54.976 "data_size": 63488 00:08:54.976 } 00:08:54.976 ] 00:08:54.976 }' 00:08:54.976 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.976 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.546 22:53:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.546 [2024-11-26 22:53:34.473845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.546 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.547 "name": "Existed_Raid", 00:08:55.547 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:55.547 "strip_size_kb": 64, 00:08:55.547 "state": "configuring", 00:08:55.547 "raid_level": "concat", 00:08:55.547 "superblock": true, 00:08:55.547 "num_base_bdevs": 3, 00:08:55.547 "num_base_bdevs_discovered": 2, 00:08:55.547 "num_base_bdevs_operational": 3, 00:08:55.547 "base_bdevs_list": [ 00:08:55.547 { 00:08:55.547 "name": "BaseBdev1", 00:08:55.547 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:55.547 "is_configured": true, 00:08:55.547 "data_offset": 2048, 00:08:55.547 "data_size": 63488 00:08:55.547 }, 00:08:55.547 { 00:08:55.547 "name": null, 00:08:55.547 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:55.547 "is_configured": false, 00:08:55.547 "data_offset": 0, 00:08:55.547 "data_size": 63488 00:08:55.547 }, 00:08:55.547 { 00:08:55.547 "name": "BaseBdev3", 00:08:55.547 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:55.547 "is_configured": true, 00:08:55.547 "data_offset": 2048, 00:08:55.547 "data_size": 63488 00:08:55.547 } 00:08:55.547 ] 00:08:55.547 }' 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.547 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.116 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:56.116 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.116 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.116 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.117 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.117 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:56.117 22:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.117 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.117 22:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.117 [2024-11-26 22:53:35.002019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.117 "name": "Existed_Raid", 00:08:56.117 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:56.117 "strip_size_kb": 64, 00:08:56.117 "state": "configuring", 00:08:56.117 "raid_level": "concat", 00:08:56.117 "superblock": true, 00:08:56.117 "num_base_bdevs": 3, 00:08:56.117 "num_base_bdevs_discovered": 1, 00:08:56.117 "num_base_bdevs_operational": 3, 00:08:56.117 "base_bdevs_list": [ 00:08:56.117 { 00:08:56.117 "name": null, 00:08:56.117 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:56.117 "is_configured": false, 00:08:56.117 "data_offset": 0, 00:08:56.117 "data_size": 63488 00:08:56.117 }, 00:08:56.117 { 00:08:56.117 "name": null, 00:08:56.117 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:56.117 "is_configured": false, 00:08:56.117 "data_offset": 0, 00:08:56.117 "data_size": 63488 00:08:56.117 }, 00:08:56.117 { 00:08:56.117 "name": "BaseBdev3", 00:08:56.117 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 
00:08:56.117 "is_configured": true, 00:08:56.117 "data_offset": 2048, 00:08:56.117 "data_size": 63488 00:08:56.117 } 00:08:56.117 ] 00:08:56.117 }' 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.117 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 [2024-11-26 22:53:35.452552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.379 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.650 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.650 "name": "Existed_Raid", 00:08:56.650 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:56.650 "strip_size_kb": 64, 00:08:56.650 "state": "configuring", 00:08:56.650 "raid_level": "concat", 00:08:56.650 "superblock": true, 00:08:56.650 "num_base_bdevs": 3, 00:08:56.650 "num_base_bdevs_discovered": 2, 00:08:56.650 "num_base_bdevs_operational": 3, 00:08:56.650 "base_bdevs_list": [ 00:08:56.650 { 00:08:56.650 "name": null, 00:08:56.650 
"uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:56.650 "is_configured": false, 00:08:56.650 "data_offset": 0, 00:08:56.650 "data_size": 63488 00:08:56.650 }, 00:08:56.650 { 00:08:56.650 "name": "BaseBdev2", 00:08:56.650 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:56.650 "is_configured": true, 00:08:56.650 "data_offset": 2048, 00:08:56.650 "data_size": 63488 00:08:56.650 }, 00:08:56.650 { 00:08:56.650 "name": "BaseBdev3", 00:08:56.650 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:56.650 "is_configured": true, 00:08:56.650 "data_offset": 2048, 00:08:56.650 "data_size": 63488 00:08:56.650 } 00:08:56.650 ] 00:08:56.650 }' 00:08:56.650 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.650 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.910 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.910 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.910 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dbc556ac-f6a6-4304-94a5-1568d5ed629c 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 [2024-11-26 22:53:35.955454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:56.911 [2024-11-26 22:53:35.955608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.911 [2024-11-26 22:53:35.955620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.911 [2024-11-26 22:53:35.955855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:56.911 [2024-11-26 22:53:35.955971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.911 [2024-11-26 22:53:35.955985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:56.911 [2024-11-26 22:53:35.956080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.911 NewBaseBdev 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.911 22:53:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 [ 00:08:56.911 { 00:08:56.911 "name": "NewBaseBdev", 00:08:56.911 "aliases": [ 00:08:56.911 "dbc556ac-f6a6-4304-94a5-1568d5ed629c" 00:08:56.911 ], 00:08:56.911 "product_name": "Malloc disk", 00:08:56.911 "block_size": 512, 00:08:56.911 "num_blocks": 65536, 00:08:56.911 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:56.911 "assigned_rate_limits": { 00:08:56.911 "rw_ios_per_sec": 0, 00:08:56.911 "rw_mbytes_per_sec": 0, 00:08:56.911 "r_mbytes_per_sec": 0, 00:08:56.911 "w_mbytes_per_sec": 0 00:08:56.911 }, 00:08:56.911 "claimed": true, 00:08:56.911 "claim_type": "exclusive_write", 00:08:56.911 "zoned": false, 00:08:56.911 "supported_io_types": { 00:08:56.911 "read": true, 00:08:56.911 "write": true, 00:08:56.911 "unmap": true, 00:08:56.911 "flush": true, 00:08:56.911 "reset": true, 00:08:56.911 "nvme_admin": false, 00:08:56.911 "nvme_io": 
false, 00:08:56.911 "nvme_io_md": false, 00:08:56.911 "write_zeroes": true, 00:08:56.911 "zcopy": true, 00:08:56.911 "get_zone_info": false, 00:08:56.911 "zone_management": false, 00:08:56.911 "zone_append": false, 00:08:56.911 "compare": false, 00:08:56.911 "compare_and_write": false, 00:08:56.911 "abort": true, 00:08:56.911 "seek_hole": false, 00:08:56.911 "seek_data": false, 00:08:56.911 "copy": true, 00:08:56.911 "nvme_iov_md": false 00:08:56.911 }, 00:08:56.911 "memory_domains": [ 00:08:56.911 { 00:08:56.911 "dma_device_id": "system", 00:08:56.911 "dma_device_type": 1 00:08:56.911 }, 00:08:56.911 { 00:08:56.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.911 "dma_device_type": 2 00:08:56.911 } 00:08:56.911 ], 00:08:56.911 "driver_specific": {} 00:08:56.911 } 00:08:56.911 ] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.911 22:53:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.911 22:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.171 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.171 "name": "Existed_Raid", 00:08:57.171 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:57.171 "strip_size_kb": 64, 00:08:57.171 "state": "online", 00:08:57.171 "raid_level": "concat", 00:08:57.171 "superblock": true, 00:08:57.171 "num_base_bdevs": 3, 00:08:57.171 "num_base_bdevs_discovered": 3, 00:08:57.171 "num_base_bdevs_operational": 3, 00:08:57.171 "base_bdevs_list": [ 00:08:57.171 { 00:08:57.171 "name": "NewBaseBdev", 00:08:57.171 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:57.171 "is_configured": true, 00:08:57.171 "data_offset": 2048, 00:08:57.171 "data_size": 63488 00:08:57.171 }, 00:08:57.171 { 00:08:57.171 "name": "BaseBdev2", 00:08:57.171 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:57.171 "is_configured": true, 00:08:57.171 "data_offset": 2048, 00:08:57.171 "data_size": 63488 00:08:57.171 }, 00:08:57.171 { 00:08:57.171 "name": "BaseBdev3", 00:08:57.171 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:57.171 "is_configured": true, 00:08:57.171 "data_offset": 2048, 00:08:57.171 "data_size": 63488 00:08:57.171 } 00:08:57.171 ] 00:08:57.171 
}' 00:08:57.171 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.171 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.431 [2024-11-26 22:53:36.427943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.431 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.431 "name": "Existed_Raid", 00:08:57.431 "aliases": [ 00:08:57.431 "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2" 00:08:57.431 ], 00:08:57.431 "product_name": "Raid Volume", 00:08:57.431 "block_size": 512, 00:08:57.431 "num_blocks": 190464, 00:08:57.431 "uuid": 
"b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:57.431 "assigned_rate_limits": { 00:08:57.431 "rw_ios_per_sec": 0, 00:08:57.431 "rw_mbytes_per_sec": 0, 00:08:57.431 "r_mbytes_per_sec": 0, 00:08:57.431 "w_mbytes_per_sec": 0 00:08:57.431 }, 00:08:57.431 "claimed": false, 00:08:57.431 "zoned": false, 00:08:57.431 "supported_io_types": { 00:08:57.431 "read": true, 00:08:57.431 "write": true, 00:08:57.431 "unmap": true, 00:08:57.431 "flush": true, 00:08:57.431 "reset": true, 00:08:57.431 "nvme_admin": false, 00:08:57.431 "nvme_io": false, 00:08:57.431 "nvme_io_md": false, 00:08:57.431 "write_zeroes": true, 00:08:57.431 "zcopy": false, 00:08:57.431 "get_zone_info": false, 00:08:57.431 "zone_management": false, 00:08:57.431 "zone_append": false, 00:08:57.431 "compare": false, 00:08:57.431 "compare_and_write": false, 00:08:57.431 "abort": false, 00:08:57.431 "seek_hole": false, 00:08:57.431 "seek_data": false, 00:08:57.431 "copy": false, 00:08:57.431 "nvme_iov_md": false 00:08:57.431 }, 00:08:57.431 "memory_domains": [ 00:08:57.431 { 00:08:57.431 "dma_device_id": "system", 00:08:57.431 "dma_device_type": 1 00:08:57.431 }, 00:08:57.431 { 00:08:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.431 "dma_device_type": 2 00:08:57.431 }, 00:08:57.431 { 00:08:57.431 "dma_device_id": "system", 00:08:57.431 "dma_device_type": 1 00:08:57.431 }, 00:08:57.431 { 00:08:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.431 "dma_device_type": 2 00:08:57.431 }, 00:08:57.431 { 00:08:57.431 "dma_device_id": "system", 00:08:57.431 "dma_device_type": 1 00:08:57.431 }, 00:08:57.431 { 00:08:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.431 "dma_device_type": 2 00:08:57.431 } 00:08:57.431 ], 00:08:57.431 "driver_specific": { 00:08:57.431 "raid": { 00:08:57.431 "uuid": "b5a1e3d8-69a5-4979-8ca4-06e8575ca2b2", 00:08:57.431 "strip_size_kb": 64, 00:08:57.431 "state": "online", 00:08:57.431 "raid_level": "concat", 00:08:57.431 "superblock": true, 00:08:57.431 "num_base_bdevs": 
3, 00:08:57.431 "num_base_bdevs_discovered": 3, 00:08:57.431 "num_base_bdevs_operational": 3, 00:08:57.431 "base_bdevs_list": [ 00:08:57.432 { 00:08:57.432 "name": "NewBaseBdev", 00:08:57.432 "uuid": "dbc556ac-f6a6-4304-94a5-1568d5ed629c", 00:08:57.432 "is_configured": true, 00:08:57.432 "data_offset": 2048, 00:08:57.432 "data_size": 63488 00:08:57.432 }, 00:08:57.432 { 00:08:57.432 "name": "BaseBdev2", 00:08:57.432 "uuid": "41d052df-8833-4b50-94b1-01fba0ae1fe3", 00:08:57.432 "is_configured": true, 00:08:57.432 "data_offset": 2048, 00:08:57.432 "data_size": 63488 00:08:57.432 }, 00:08:57.432 { 00:08:57.432 "name": "BaseBdev3", 00:08:57.432 "uuid": "a7fbc350-a883-4f0d-af6e-d5a9d92f46b1", 00:08:57.432 "is_configured": true, 00:08:57.432 "data_offset": 2048, 00:08:57.432 "data_size": 63488 00:08:57.432 } 00:08:57.432 ] 00:08:57.432 } 00:08:57.432 } 00:08:57.432 }' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:57.432 BaseBdev2 00:08:57.432 BaseBdev3' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:57.432 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.692 22:53:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.692 [2024-11-26 22:53:36.667706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.692 [2024-11-26 22:53:36.667738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.692 [2024-11-26 22:53:36.667804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.692 [2024-11-26 22:53:36.667859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.692 [2024-11-26 22:53:36.667869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78955 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78955 ']' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78955 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 
00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78955 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.692 killing process with pid 78955 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78955' 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78955 00:08:57.692 [2024-11-26 22:53:36.715729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.692 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78955 00:08:57.692 [2024-11-26 22:53:36.745932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.952 22:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.952 00:08:57.952 real 0m8.745s 00:08:57.952 user 0m14.934s 00:08:57.952 sys 0m1.757s 00:08:57.952 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.952 22:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.952 ************************************ 00:08:57.952 END TEST raid_state_function_test_sb 00:08:57.952 ************************************ 00:08:57.952 22:53:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:57.952 22:53:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:57.952 22:53:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.952 22:53:37 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.952 ************************************ 00:08:57.952 START TEST raid_superblock_test 00:08:57.952 ************************************ 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:57.952 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79554 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79554 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79554 ']' 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.953 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.213 [2024-11-26 22:53:37.121602] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:58.213 [2024-11-26 22:53:37.121732] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79554 ] 00:08:58.213 [2024-11-26 22:53:37.256064] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:58.213 [2024-11-26 22:53:37.294493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.213 [2024-11-26 22:53:37.320125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.472 [2024-11-26 22:53:37.361512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.472 [2024-11-26 22:53:37.361565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 malloc1 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 [2024-11-26 22:53:37.969257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.042 [2024-11-26 22:53:37.969339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.042 [2024-11-26 22:53:37.969363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:59.042 [2024-11-26 22:53:37.969374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.042 [2024-11-26 22:53:37.971357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.042 [2024-11-26 22:53:37.971395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.042 pt1 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 malloc2 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 [2024-11-26 22:53:37.997461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.042 [2024-11-26 22:53:37.997511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.042 [2024-11-26 22:53:37.997528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:59.042 [2024-11-26 22:53:37.997535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.042 [2024-11-26 22:53:37.999466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.042 [2024-11-26 22:53:37.999500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.042 pt2 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 malloc3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 [2024-11-26 22:53:38.025600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.042 [2024-11-26 22:53:38.025645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.042 [2024-11-26 22:53:38.025664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:59.042 [2024-11-26 22:53:38.025672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:59.042 [2024-11-26 22:53:38.027658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.042 [2024-11-26 22:53:38.027692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.042 pt3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 [2024-11-26 22:53:38.037645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.042 [2024-11-26 22:53:38.039380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.042 [2024-11-26 22:53:38.039439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.042 [2024-11-26 22:53:38.039577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:59.042 [2024-11-26 22:53:38.039596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.042 [2024-11-26 22:53:38.039818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:59.042 [2024-11-26 22:53:38.039948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:59.042 [2024-11-26 22:53:38.039975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:59.042 [2024-11-26 
22:53:38.040081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.042 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.042 "name": "raid_bdev1", 00:08:59.042 
"uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:08:59.042 "strip_size_kb": 64, 00:08:59.042 "state": "online", 00:08:59.043 "raid_level": "concat", 00:08:59.043 "superblock": true, 00:08:59.043 "num_base_bdevs": 3, 00:08:59.043 "num_base_bdevs_discovered": 3, 00:08:59.043 "num_base_bdevs_operational": 3, 00:08:59.043 "base_bdevs_list": [ 00:08:59.043 { 00:08:59.043 "name": "pt1", 00:08:59.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.043 "is_configured": true, 00:08:59.043 "data_offset": 2048, 00:08:59.043 "data_size": 63488 00:08:59.043 }, 00:08:59.043 { 00:08:59.043 "name": "pt2", 00:08:59.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.043 "is_configured": true, 00:08:59.043 "data_offset": 2048, 00:08:59.043 "data_size": 63488 00:08:59.043 }, 00:08:59.043 { 00:08:59.043 "name": "pt3", 00:08:59.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.043 "is_configured": true, 00:08:59.043 "data_offset": 2048, 00:08:59.043 "data_size": 63488 00:08:59.043 } 00:08:59.043 ] 00:08:59.043 }' 00:08:59.043 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.043 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.640 
22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.640 [2024-11-26 22:53:38.498062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.640 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.640 "name": "raid_bdev1", 00:08:59.641 "aliases": [ 00:08:59.641 "49f372cb-3f7c-4e58-a66f-0c50971d3e4e" 00:08:59.641 ], 00:08:59.641 "product_name": "Raid Volume", 00:08:59.641 "block_size": 512, 00:08:59.641 "num_blocks": 190464, 00:08:59.641 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:08:59.641 "assigned_rate_limits": { 00:08:59.641 "rw_ios_per_sec": 0, 00:08:59.641 "rw_mbytes_per_sec": 0, 00:08:59.641 "r_mbytes_per_sec": 0, 00:08:59.641 "w_mbytes_per_sec": 0 00:08:59.641 }, 00:08:59.641 "claimed": false, 00:08:59.641 "zoned": false, 00:08:59.641 "supported_io_types": { 00:08:59.641 "read": true, 00:08:59.641 "write": true, 00:08:59.641 "unmap": true, 00:08:59.641 "flush": true, 00:08:59.641 "reset": true, 00:08:59.641 "nvme_admin": false, 00:08:59.641 "nvme_io": false, 00:08:59.641 "nvme_io_md": false, 00:08:59.641 "write_zeroes": true, 00:08:59.641 "zcopy": false, 00:08:59.641 "get_zone_info": false, 00:08:59.641 "zone_management": false, 00:08:59.641 "zone_append": false, 00:08:59.641 "compare": false, 00:08:59.641 "compare_and_write": false, 00:08:59.641 "abort": false, 00:08:59.641 "seek_hole": false, 00:08:59.641 "seek_data": false, 00:08:59.641 "copy": false, 00:08:59.641 "nvme_iov_md": false 00:08:59.641 }, 00:08:59.641 "memory_domains": [ 00:08:59.641 { 00:08:59.641 "dma_device_id": "system", 00:08:59.641 
"dma_device_type": 1 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.641 "dma_device_type": 2 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "system", 00:08:59.641 "dma_device_type": 1 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.641 "dma_device_type": 2 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "system", 00:08:59.641 "dma_device_type": 1 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.641 "dma_device_type": 2 00:08:59.641 } 00:08:59.641 ], 00:08:59.641 "driver_specific": { 00:08:59.641 "raid": { 00:08:59.641 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:08:59.641 "strip_size_kb": 64, 00:08:59.641 "state": "online", 00:08:59.641 "raid_level": "concat", 00:08:59.641 "superblock": true, 00:08:59.641 "num_base_bdevs": 3, 00:08:59.641 "num_base_bdevs_discovered": 3, 00:08:59.641 "num_base_bdevs_operational": 3, 00:08:59.641 "base_bdevs_list": [ 00:08:59.641 { 00:08:59.641 "name": "pt1", 00:08:59.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.641 "is_configured": true, 00:08:59.641 "data_offset": 2048, 00:08:59.641 "data_size": 63488 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "name": "pt2", 00:08:59.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.641 "is_configured": true, 00:08:59.641 "data_offset": 2048, 00:08:59.641 "data_size": 63488 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "name": "pt3", 00:08:59.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.641 "is_configured": true, 00:08:59.641 "data_offset": 2048, 00:08:59.641 "data_size": 63488 00:08:59.641 } 00:08:59.641 ] 00:08:59.641 } 00:08:59.641 } 00:08:59.641 }' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.641 pt2 00:08:59.641 pt3' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:59.641 [2024-11-26 22:53:38.726055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.641 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49f372cb-3f7c-4e58-a66f-0c50971d3e4e 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49f372cb-3f7c-4e58-a66f-0c50971d3e4e ']' 00:08:59.901 22:53:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 [2024-11-26 22:53:38.773794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.901 [2024-11-26 22:53:38.773822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.901 [2024-11-26 22:53:38.773906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.901 [2024-11-26 22:53:38.773971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.901 [2024-11-26 22:53:38.773980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 [2024-11-26 22:53:38.917885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:59.901 [2024-11-26 22:53:38.919717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:59.901 [2024-11-26 22:53:38.919769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:59.901 [2024-11-26 22:53:38.919816] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:59.901 [2024-11-26 22:53:38.919860] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:59.901 [2024-11-26 22:53:38.919877] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:59.901 [2024-11-26 22:53:38.919893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.901 [2024-11-26 22:53:38.919902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:59.901 request: 00:08:59.901 { 00:08:59.901 "name": "raid_bdev1", 00:08:59.901 "raid_level": "concat", 00:08:59.901 "base_bdevs": [ 00:08:59.901 "malloc1", 00:08:59.901 "malloc2", 00:08:59.901 "malloc3" 00:08:59.901 ], 00:08:59.901 "strip_size_kb": 64, 00:08:59.901 "superblock": false, 00:08:59.901 "method": "bdev_raid_create", 00:08:59.901 "req_id": 1 00:08:59.901 } 00:08:59.901 Got JSON-RPC error response 00:08:59.901 response: 00:08:59.901 { 00:08:59.901 "code": -17, 00:08:59.901 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:59.901 } 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:59.901 22:53:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:59.901 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.902 [2024-11-26 22:53:38.993852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.902 [2024-11-26 22:53:38.993906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.902 [2024-11-26 22:53:38.993928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:59.902 [2024-11-26 22:53:38.993937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.902 [2024-11-26 22:53:38.996019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.902 [2024-11-26 22:53:38.996055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.902 [2024-11-26 22:53:38.996121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:59.902 [2024-11-26 22:53:38.996172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.902 pt1 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:59.902 22:53:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.902 22:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.902 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.161 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.161 "name": "raid_bdev1", 00:09:00.161 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:09:00.161 "strip_size_kb": 64, 00:09:00.161 "state": "configuring", 00:09:00.161 "raid_level": "concat", 00:09:00.161 "superblock": true, 00:09:00.161 "num_base_bdevs": 3, 00:09:00.161 "num_base_bdevs_discovered": 1, 00:09:00.161 "num_base_bdevs_operational": 3, 00:09:00.161 "base_bdevs_list": [ 
00:09:00.161 { 00:09:00.161 "name": "pt1", 00:09:00.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.161 "is_configured": true, 00:09:00.161 "data_offset": 2048, 00:09:00.161 "data_size": 63488 00:09:00.161 }, 00:09:00.161 { 00:09:00.161 "name": null, 00:09:00.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.161 "is_configured": false, 00:09:00.161 "data_offset": 2048, 00:09:00.161 "data_size": 63488 00:09:00.161 }, 00:09:00.161 { 00:09:00.161 "name": null, 00:09:00.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.161 "is_configured": false, 00:09:00.161 "data_offset": 2048, 00:09:00.161 "data_size": 63488 00:09:00.161 } 00:09:00.161 ] 00:09:00.161 }' 00:09:00.161 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.161 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.420 [2024-11-26 22:53:39.433997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.420 [2024-11-26 22:53:39.434064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.420 [2024-11-26 22:53:39.434089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:00.420 [2024-11-26 22:53:39.434099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.420 [2024-11-26 22:53:39.434537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.420 [2024-11-26 
22:53:39.434564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.420 [2024-11-26 22:53:39.434640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.420 [2024-11-26 22:53:39.434673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.420 pt2 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.420 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.421 [2024-11-26 22:53:39.446020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.421 "name": "raid_bdev1", 00:09:00.421 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:09:00.421 "strip_size_kb": 64, 00:09:00.421 "state": "configuring", 00:09:00.421 "raid_level": "concat", 00:09:00.421 "superblock": true, 00:09:00.421 "num_base_bdevs": 3, 00:09:00.421 "num_base_bdevs_discovered": 1, 00:09:00.421 "num_base_bdevs_operational": 3, 00:09:00.421 "base_bdevs_list": [ 00:09:00.421 { 00:09:00.421 "name": "pt1", 00:09:00.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.421 "is_configured": true, 00:09:00.421 "data_offset": 2048, 00:09:00.421 "data_size": 63488 00:09:00.421 }, 00:09:00.421 { 00:09:00.421 "name": null, 00:09:00.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.421 "is_configured": false, 00:09:00.421 "data_offset": 0, 00:09:00.421 "data_size": 63488 00:09:00.421 }, 00:09:00.421 { 00:09:00.421 "name": null, 00:09:00.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.421 "is_configured": false, 00:09:00.421 "data_offset": 2048, 00:09:00.421 "data_size": 63488 00:09:00.421 } 00:09:00.421 ] 00:09:00.421 }' 00:09:00.421 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.421 22:53:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 [2024-11-26 22:53:39.906119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.991 [2024-11-26 22:53:39.906201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.991 [2024-11-26 22:53:39.906220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:00.991 [2024-11-26 22:53:39.906234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.991 [2024-11-26 22:53:39.906629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.991 [2024-11-26 22:53:39.906657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.991 [2024-11-26 22:53:39.906724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.991 [2024-11-26 22:53:39.906752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.991 pt2 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 [2024-11-26 22:53:39.918099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:00.991 [2024-11-26 22:53:39.918147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.991 [2024-11-26 22:53:39.918166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:00.991 [2024-11-26 22:53:39.918177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.991 [2024-11-26 22:53:39.918518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.991 [2024-11-26 22:53:39.918546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:00.991 [2024-11-26 22:53:39.918600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:00.991 [2024-11-26 22:53:39.918620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:00.991 [2024-11-26 22:53:39.918709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:00.991 [2024-11-26 22:53:39.918727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.991 [2024-11-26 22:53:39.918956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:00.991 [2024-11-26 22:53:39.919083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:00.991 [2024-11-26 22:53:39.919097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:00.991 [2024-11-26 22:53:39.919191] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.991 pt3 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.991 "name": "raid_bdev1", 00:09:00.991 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:09:00.991 "strip_size_kb": 64, 00:09:00.991 "state": "online", 00:09:00.991 "raid_level": "concat", 00:09:00.991 "superblock": true, 00:09:00.991 "num_base_bdevs": 3, 00:09:00.991 "num_base_bdevs_discovered": 3, 00:09:00.991 "num_base_bdevs_operational": 3, 00:09:00.991 "base_bdevs_list": [ 00:09:00.991 { 00:09:00.991 "name": "pt1", 00:09:00.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.991 "is_configured": true, 00:09:00.991 "data_offset": 2048, 00:09:00.991 "data_size": 63488 00:09:00.991 }, 00:09:00.991 { 00:09:00.991 "name": "pt2", 00:09:00.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.991 "is_configured": true, 00:09:00.991 "data_offset": 2048, 00:09:00.991 "data_size": 63488 00:09:00.991 }, 00:09:00.991 { 00:09:00.991 "name": "pt3", 00:09:00.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.991 "is_configured": true, 00:09:00.991 "data_offset": 2048, 00:09:00.991 "data_size": 63488 00:09:00.991 } 00:09:00.991 ] 00:09:00.991 }' 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.991 22:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.251 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.251 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.251 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.251 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.251 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.251 22:53:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.252 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.252 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.252 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.252 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.252 [2024-11-26 22:53:40.358565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.252 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.511 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.512 "name": "raid_bdev1", 00:09:01.512 "aliases": [ 00:09:01.512 "49f372cb-3f7c-4e58-a66f-0c50971d3e4e" 00:09:01.512 ], 00:09:01.512 "product_name": "Raid Volume", 00:09:01.512 "block_size": 512, 00:09:01.512 "num_blocks": 190464, 00:09:01.512 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:09:01.512 "assigned_rate_limits": { 00:09:01.512 "rw_ios_per_sec": 0, 00:09:01.512 "rw_mbytes_per_sec": 0, 00:09:01.512 "r_mbytes_per_sec": 0, 00:09:01.512 "w_mbytes_per_sec": 0 00:09:01.512 }, 00:09:01.512 "claimed": false, 00:09:01.512 "zoned": false, 00:09:01.512 "supported_io_types": { 00:09:01.512 "read": true, 00:09:01.512 "write": true, 00:09:01.512 "unmap": true, 00:09:01.512 "flush": true, 00:09:01.512 "reset": true, 00:09:01.512 "nvme_admin": false, 00:09:01.512 "nvme_io": false, 00:09:01.512 "nvme_io_md": false, 00:09:01.512 "write_zeroes": true, 00:09:01.512 "zcopy": false, 00:09:01.512 "get_zone_info": false, 00:09:01.512 "zone_management": false, 00:09:01.512 "zone_append": false, 00:09:01.512 "compare": false, 00:09:01.512 "compare_and_write": false, 00:09:01.512 "abort": false, 00:09:01.512 "seek_hole": false, 00:09:01.512 
"seek_data": false, 00:09:01.512 "copy": false, 00:09:01.512 "nvme_iov_md": false 00:09:01.512 }, 00:09:01.512 "memory_domains": [ 00:09:01.512 { 00:09:01.512 "dma_device_id": "system", 00:09:01.512 "dma_device_type": 1 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.512 "dma_device_type": 2 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "dma_device_id": "system", 00:09:01.512 "dma_device_type": 1 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.512 "dma_device_type": 2 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "dma_device_id": "system", 00:09:01.512 "dma_device_type": 1 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.512 "dma_device_type": 2 00:09:01.512 } 00:09:01.512 ], 00:09:01.512 "driver_specific": { 00:09:01.512 "raid": { 00:09:01.512 "uuid": "49f372cb-3f7c-4e58-a66f-0c50971d3e4e", 00:09:01.512 "strip_size_kb": 64, 00:09:01.512 "state": "online", 00:09:01.512 "raid_level": "concat", 00:09:01.512 "superblock": true, 00:09:01.512 "num_base_bdevs": 3, 00:09:01.512 "num_base_bdevs_discovered": 3, 00:09:01.512 "num_base_bdevs_operational": 3, 00:09:01.512 "base_bdevs_list": [ 00:09:01.512 { 00:09:01.512 "name": "pt1", 00:09:01.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.512 "is_configured": true, 00:09:01.512 "data_offset": 2048, 00:09:01.512 "data_size": 63488 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "name": "pt2", 00:09:01.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.512 "is_configured": true, 00:09:01.512 "data_offset": 2048, 00:09:01.512 "data_size": 63488 00:09:01.512 }, 00:09:01.512 { 00:09:01.512 "name": "pt3", 00:09:01.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.512 "is_configured": true, 00:09:01.512 "data_offset": 2048, 00:09:01.512 "data_size": 63488 00:09:01.512 } 00:09:01.512 ] 00:09:01.512 } 00:09:01.512 } 00:09:01.512 }' 00:09:01.512 22:53:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.512 pt2 00:09:01.512 pt3' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.512 [2024-11-26 22:53:40.594596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
49f372cb-3f7c-4e58-a66f-0c50971d3e4e '!=' 49f372cb-3f7c-4e58-a66f-0c50971d3e4e ']' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79554 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79554 ']' 00:09:01.512 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79554 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79554 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.773 killing process with pid 79554 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79554' 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79554 00:09:01.773 [2024-11-26 22:53:40.680188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.773 [2024-11-26 22:53:40.680297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.773 [2024-11-26 22:53:40.680365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.773 [2024-11-26 22:53:40.680382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:01.773 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79554 00:09:01.773 [2024-11-26 22:53:40.713125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.032 22:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:02.032 00:09:02.032 real 0m3.897s 00:09:02.032 user 0m6.136s 00:09:02.032 sys 0m0.855s 00:09:02.032 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.032 22:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.032 ************************************ 00:09:02.032 END TEST raid_superblock_test 00:09:02.032 ************************************ 00:09:02.032 22:53:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:02.032 22:53:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.032 22:53:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.032 22:53:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.032 ************************************ 00:09:02.032 START TEST raid_read_error_test 00:09:02.032 ************************************ 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.032 22:53:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NmmV59v5ax 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79795 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79795 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79795 ']' 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.032 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.032 [2024-11-26 22:53:41.102997] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:02.032 [2024-11-26 22:53:41.103123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79795 ] 00:09:02.292 [2024-11-26 22:53:41.237119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:02.292 [2024-11-26 22:53:41.274472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.292 [2024-11-26 22:53:41.299341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.292 [2024-11-26 22:53:41.340975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.292 [2024-11-26 22:53:41.341007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.862 BaseBdev1_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.862 true 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.862 [2024-11-26 22:53:41.957124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.862 [2024-11-26 22:53:41.957219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.862 [2024-11-26 22:53:41.957261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.862 [2024-11-26 22:53:41.957275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.862 [2024-11-26 22:53:41.959448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.862 [2024-11-26 22:53:41.959483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.862 BaseBdev1 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.862 BaseBdev2_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.862 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 true 00:09:03.123 22:53:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.123 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 [2024-11-26 22:53:41.997482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.123 [2024-11-26 22:53:41.997527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.123 [2024-11-26 22:53:41.997542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.123 [2024-11-26 22:53:41.997551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.123 [2024-11-26 22:53:41.999578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.123 [2024-11-26 22:53:41.999665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.123 BaseBdev2 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 BaseBdev3_malloc 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:03.123 
22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 true 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 [2024-11-26 22:53:42.037798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:03.123 [2024-11-26 22:53:42.037885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.123 [2024-11-26 22:53:42.037905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:03.123 [2024-11-26 22:53:42.037915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.123 [2024-11-26 22:53:42.040031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.123 [2024-11-26 22:53:42.040066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:03.123 BaseBdev3 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 [2024-11-26 22:53:42.049845] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.123 [2024-11-26 22:53:42.051603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.123 [2024-11-26 22:53:42.051669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.123 [2024-11-26 22:53:42.051834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.123 [2024-11-26 22:53:42.051845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.123 [2024-11-26 22:53:42.052087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:03.123 [2024-11-26 22:53:42.052234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.123 [2024-11-26 22:53:42.052250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:03.123 [2024-11-26 22:53:42.052370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.123 "name": "raid_bdev1", 00:09:03.123 "uuid": "2f27c46b-da82-4900-ae01-2d3bb89f7ce2", 00:09:03.123 "strip_size_kb": 64, 00:09:03.123 "state": "online", 00:09:03.123 "raid_level": "concat", 00:09:03.123 "superblock": true, 00:09:03.123 "num_base_bdevs": 3, 00:09:03.123 "num_base_bdevs_discovered": 3, 00:09:03.123 "num_base_bdevs_operational": 3, 00:09:03.123 "base_bdevs_list": [ 00:09:03.123 { 00:09:03.123 "name": "BaseBdev1", 00:09:03.123 "uuid": "4ad71811-92de-56d3-826f-94947ba9dc49", 00:09:03.123 "is_configured": true, 00:09:03.123 "data_offset": 2048, 00:09:03.123 "data_size": 63488 00:09:03.123 }, 00:09:03.123 { 00:09:03.123 "name": "BaseBdev2", 00:09:03.123 "uuid": "58de3506-6ae6-573d-8c5b-8ba471615659", 00:09:03.123 "is_configured": true, 00:09:03.123 "data_offset": 2048, 00:09:03.123 "data_size": 63488 00:09:03.123 }, 00:09:03.123 { 00:09:03.123 "name": "BaseBdev3", 00:09:03.123 "uuid": "54f2da17-03b1-5c53-9900-e85aac4db994", 00:09:03.123 "is_configured": true, 00:09:03.123 "data_offset": 
2048, 00:09:03.123 "data_size": 63488 00:09:03.123 } 00:09:03.123 ] 00:09:03.123 }' 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.123 22:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.384 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.384 22:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.644 [2024-11-26 22:53:42.570387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.583 "name": "raid_bdev1", 00:09:04.583 "uuid": "2f27c46b-da82-4900-ae01-2d3bb89f7ce2", 00:09:04.583 "strip_size_kb": 64, 00:09:04.583 "state": "online", 00:09:04.583 "raid_level": "concat", 00:09:04.583 "superblock": true, 00:09:04.583 "num_base_bdevs": 3, 00:09:04.583 "num_base_bdevs_discovered": 3, 00:09:04.583 "num_base_bdevs_operational": 3, 00:09:04.583 "base_bdevs_list": [ 00:09:04.583 { 00:09:04.583 "name": "BaseBdev1", 00:09:04.583 "uuid": "4ad71811-92de-56d3-826f-94947ba9dc49", 00:09:04.583 "is_configured": true, 00:09:04.583 "data_offset": 2048, 00:09:04.583 "data_size": 63488 00:09:04.583 }, 00:09:04.583 { 00:09:04.583 "name": "BaseBdev2", 00:09:04.583 "uuid": "58de3506-6ae6-573d-8c5b-8ba471615659", 00:09:04.583 "is_configured": true, 00:09:04.583 "data_offset": 2048, 
00:09:04.583 "data_size": 63488 00:09:04.583 }, 00:09:04.583 { 00:09:04.583 "name": "BaseBdev3", 00:09:04.583 "uuid": "54f2da17-03b1-5c53-9900-e85aac4db994", 00:09:04.583 "is_configured": true, 00:09:04.583 "data_offset": 2048, 00:09:04.583 "data_size": 63488 00:09:04.583 } 00:09:04.583 ] 00:09:04.583 }' 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.583 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.842 [2024-11-26 22:53:43.871776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.842 [2024-11-26 22:53:43.871818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.842 [2024-11-26 22:53:43.874337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.842 [2024-11-26 22:53:43.874384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.842 [2024-11-26 22:53:43.874420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.842 [2024-11-26 22:53:43.874429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:04.842 { 00:09:04.842 "results": [ 00:09:04.842 { 00:09:04.842 "job": "raid_bdev1", 00:09:04.842 "core_mask": "0x1", 00:09:04.842 "workload": "randrw", 00:09:04.842 "percentage": 50, 00:09:04.842 "status": "finished", 00:09:04.842 "queue_depth": 1, 00:09:04.842 "io_size": 131072, 00:09:04.842 "runtime": 1.299626, 00:09:04.842 "iops": 17011.047793749894, 00:09:04.842 "mibps": 
2126.380974218737, 00:09:04.842 "io_failed": 1, 00:09:04.842 "io_timeout": 0, 00:09:04.842 "avg_latency_us": 81.0272545014478, 00:09:04.842 "min_latency_us": 24.87928179203347, 00:09:04.842 "max_latency_us": 1313.8045846770679 00:09:04.842 } 00:09:04.842 ], 00:09:04.842 "core_count": 1 00:09:04.842 } 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79795 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79795 ']' 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79795 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79795 00:09:04.842 killing process with pid 79795 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.842 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79795' 00:09:04.843 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79795 00:09:04.843 [2024-11-26 22:53:43.918417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.843 22:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79795 00:09:04.843 [2024-11-26 22:53:43.942825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NmmV59v5ax 
00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.102 ************************************ 00:09:05.102 END TEST raid_read_error_test 00:09:05.102 ************************************ 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:09:05.102 00:09:05.102 real 0m3.163s 00:09:05.102 user 0m3.956s 00:09:05.102 sys 0m0.531s 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.102 22:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.102 22:53:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:05.102 22:53:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.102 22:53:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.102 22:53:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.361 ************************************ 00:09:05.361 START TEST raid_write_error_test 00:09:05.361 ************************************ 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.361 
22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0mErthfggv 00:09:05.361 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79924 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79924 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 79924 ']' 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.362 22:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.362 [2024-11-26 22:53:44.354365] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:09:05.362 [2024-11-26 22:53:44.354485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79924 ]
00:09:05.621 [2024-11-26 22:53:44.493043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:05.621 [2024-11-26 22:53:44.518919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:05.621 [2024-11-26 22:53:44.543879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.621 [2024-11-26 22:53:44.585204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:05.621 [2024-11-26 22:53:44.585245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.189 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 BaseBdev1_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 true
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 [2024-11-26 22:53:45.189181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:06.190 [2024-11-26 22:53:45.189369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:06.190 [2024-11-26 22:53:45.189408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:06.190 [2024-11-26 22:53:45.189440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:06.190 [2024-11-26 22:53:45.191508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:06.190 [2024-11-26 22:53:45.191592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:06.190 BaseBdev1
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 BaseBdev2_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 true
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 [2024-11-26 22:53:45.229385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:06.190 [2024-11-26 22:53:45.229490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:06.190 [2024-11-26 22:53:45.229538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:06.190 [2024-11-26 22:53:45.229567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:06.190 [2024-11-26 22:53:45.231578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:06.190 [2024-11-26 22:53:45.231645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:06.190 BaseBdev2
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 BaseBdev3_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 true
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 [2024-11-26 22:53:45.269569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:06.190 [2024-11-26 22:53:45.269670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:06.190 [2024-11-26 22:53:45.269718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:06.190 [2024-11-26 22:53:45.269747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:06.190 [2024-11-26 22:53:45.271763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:06.190 [2024-11-26 22:53:45.271830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:06.190 BaseBdev3
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 [2024-11-26 22:53:45.281641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:06.190 [2024-11-26 22:53:45.283416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:06.190 [2024-11-26 22:53:45.283481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:06.190 [2024-11-26 22:53:45.283648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:06.190 [2024-11-26 22:53:45.283659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:06.190 [2024-11-26 22:53:45.283875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970
00:09:06.190 [2024-11-26 22:53:45.284007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:06.190 [2024-11-26 22:53:45.284018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:06.190 [2024-11-26 22:53:45.284112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:06.190 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.449 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.449 "name": "raid_bdev1",
00:09:06.449 "uuid": "ff75a37b-1b84-43b7-8f24-f490ccb4645a",
00:09:06.449 "strip_size_kb": 64,
00:09:06.449 "state": "online",
00:09:06.449 "raid_level": "concat",
00:09:06.449 "superblock": true,
00:09:06.449 "num_base_bdevs": 3,
00:09:06.449 "num_base_bdevs_discovered": 3,
00:09:06.449 "num_base_bdevs_operational": 3,
00:09:06.449 "base_bdevs_list": [
00:09:06.449 {
00:09:06.449 "name": "BaseBdev1",
00:09:06.449 "uuid": "e06e5201-4ba5-5f9b-87cc-2b49d11755d8",
00:09:06.449 "is_configured": true,
00:09:06.449 "data_offset": 2048,
00:09:06.449 "data_size": 63488
00:09:06.449 },
00:09:06.449 {
00:09:06.449 "name": "BaseBdev2",
00:09:06.449 "uuid": "623ff178-23d1-5a45-bc55-4beed50779f6",
00:09:06.449 "is_configured": true,
00:09:06.449 "data_offset": 2048,
00:09:06.449 "data_size": 63488
00:09:06.449 },
00:09:06.449 {
00:09:06.449 "name": "BaseBdev3",
00:09:06.449 "uuid": "42b79834-d7ab-5c28-95bd-9ed5c4c7127b",
00:09:06.449 "is_configured": true,
00:09:06.449 "data_offset": 2048,
00:09:06.449 "data_size": 63488
00:09:06.449 }
00:09:06.449 ]
00:09:06.449 }'
00:09:06.449 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.449 22:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.708 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:06.708 22:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:06.708 [2024-11-26 22:53:45.734127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.646 "name": "raid_bdev1",
00:09:07.646 "uuid": "ff75a37b-1b84-43b7-8f24-f490ccb4645a",
00:09:07.646 "strip_size_kb": 64,
00:09:07.646 "state": "online",
00:09:07.646 "raid_level": "concat",
00:09:07.646 "superblock": true,
00:09:07.646 "num_base_bdevs": 3,
00:09:07.646 "num_base_bdevs_discovered": 3,
00:09:07.646 "num_base_bdevs_operational": 3,
00:09:07.646 "base_bdevs_list": [
00:09:07.646 {
00:09:07.646 "name": "BaseBdev1",
00:09:07.646 "uuid": "e06e5201-4ba5-5f9b-87cc-2b49d11755d8",
00:09:07.646 "is_configured": true,
00:09:07.646 "data_offset": 2048,
00:09:07.646 "data_size": 63488
00:09:07.646 },
00:09:07.646 {
00:09:07.646 "name": "BaseBdev2",
00:09:07.646 "uuid": "623ff178-23d1-5a45-bc55-4beed50779f6",
00:09:07.646 "is_configured": true,
00:09:07.646 "data_offset": 2048,
00:09:07.646 "data_size": 63488
00:09:07.646 },
00:09:07.646 {
00:09:07.646 "name": "BaseBdev3",
00:09:07.646 "uuid": "42b79834-d7ab-5c28-95bd-9ed5c4c7127b",
00:09:07.646 "is_configured": true,
00:09:07.646 "data_offset": 2048,
00:09:07.646 "data_size": 63488
00:09:07.646 }
00:09:07.646 ]
00:09:07.646 }'
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.646 22:53:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.214 [2024-11-26 22:53:47.136551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:08.214 [2024-11-26 22:53:47.136675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:08.214 [2024-11-26 22:53:47.139240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:08.214 [2024-11-26 22:53:47.139332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:08.214 [2024-11-26 22:53:47.139388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:08.214 [2024-11-26 22:53:47.139427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:08.214 {
00:09:08.214 "results": [
00:09:08.214 {
00:09:08.214 "job": "raid_bdev1",
00:09:08.214 "core_mask": "0x1",
00:09:08.214 "workload": "randrw",
00:09:08.214 "percentage": 50,
00:09:08.214 "status": "finished",
00:09:08.214 "queue_depth": 1,
00:09:08.214 "io_size": 131072,
00:09:08.214 "runtime": 1.400739,
00:09:08.214 "iops": 17132.385119569026,
00:09:08.214 "mibps": 2141.5481399461282,
00:09:08.214 "io_failed": 1,
00:09:08.214 "io_timeout": 0,
00:09:08.214 "avg_latency_us": 80.40091882296716,
00:09:08.214 "min_latency_us": 24.43301664778175,
00:09:08.214 "max_latency_us": 1356.646038525233
00:09:08.214 }
00:09:08.214 ],
00:09:08.214 "core_count": 1
00:09:08.214 }
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79924
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 79924 ']'
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 79924
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79924
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79924'
00:09:08.214 killing process with pid 79924
22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 79924
[2024-11-26 22:53:47.187891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:08.214 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 79924
[2024-11-26 22:53:47.212423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0mErthfggv
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
************************************
00:09:08.474 END TEST raid_write_error_test
************************************
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:09:08.474
00:09:08.474 real 0m3.194s
00:09:08.474 user 0m4.002s
00:09:08.474 sys 0m0.519s
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.474 22:53:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.474 22:53:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:08.474 22:53:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false
00:09:08.474 22:53:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:08.474 22:53:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.474 22:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:08.474 ************************************
00:09:08.474 START TEST raid_state_function_test
00:09:08.474 ************************************
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80051
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80051'
Process raid pid: 80051
22:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80051
22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80051 ']'
22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:08.474 22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
22:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.733 [2024-11-26 22:53:47.605904] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:09:08.733 [2024-11-26 22:53:47.606111] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.733 [2024-11-26 22:53:47.740461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:08.733 [2024-11-26 22:53:47.763736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.733 [2024-11-26 22:53:47.788876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.733 [2024-11-26 22:53:47.830181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:08.733 [2024-11-26 22:53:47.830316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:09.300 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:09.300 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:09:09.300 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:09.300 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.300 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.559 [2024-11-26 22:53:48.429658] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-26 22:53:48.429715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-26 22:53:48.429729] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-26 22:53:48.429736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-26 22:53:48.429746] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-26 22:53:48.429753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.559 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.559 "name": "Existed_Raid",
00:09:09.559 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.559 "strip_size_kb": 0,
00:09:09.559 "state": "configuring",
00:09:09.559 "raid_level": "raid1",
"superblock": false,
00:09:09.559 "num_base_bdevs": 3,
00:09:09.560 "num_base_bdevs_discovered": 0,
00:09:09.560 "num_base_bdevs_operational": 3,
00:09:09.560 "base_bdevs_list": [
00:09:09.560 {
00:09:09.560 "name": "BaseBdev1",
00:09:09.560 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.560 "is_configured": false,
00:09:09.560 "data_offset": 0,
00:09:09.560 "data_size": 0
00:09:09.560 },
00:09:09.560 {
00:09:09.560 "name": "BaseBdev2",
00:09:09.560 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.560 "is_configured": false,
00:09:09.560 "data_offset": 0,
00:09:09.560 "data_size": 0
00:09:09.560 },
00:09:09.560 {
00:09:09.560 "name": "BaseBdev3",
00:09:09.560 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.560 "is_configured": false,
00:09:09.560 "data_offset": 0,
00:09:09.560 "data_size": 0
00:09:09.560 }
00:09:09.560 ]
00:09:09.560 }'
00:09:09.560 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.560 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 [2024-11-26 22:53:48.825682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:09.819 [2024-11-26 22:53:48.825780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 [2024-11-26 22:53:48.837704] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-26 22:53:48.837744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-26 22:53:48.837755] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-26 22:53:48.837778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-26 22:53:48.837785] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-26 22:53:48.837792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 [2024-11-26 22:53:48.858376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.819 [
00:09:09.819 {
00:09:09.819 "name": "BaseBdev1",
00:09:09.819 "aliases": [
00:09:09.819 "06dae142-bfa1-41ef-b587-6689e118611d"
00:09:09.819 ],
00:09:09.819 "product_name": "Malloc disk",
00:09:09.819 "block_size": 512,
00:09:09.819 "num_blocks": 65536,
00:09:09.819 "uuid": "06dae142-bfa1-41ef-b587-6689e118611d",
00:09:09.819 "assigned_rate_limits": {
00:09:09.819 "rw_ios_per_sec": 0,
00:09:09.819 "rw_mbytes_per_sec": 0,
00:09:09.819 "r_mbytes_per_sec": 0,
00:09:09.819 "w_mbytes_per_sec": 0
00:09:09.819 },
00:09:09.819 "claimed": true,
00:09:09.819 "claim_type": "exclusive_write",
00:09:09.819 "zoned": false,
00:09:09.819 "supported_io_types": {
00:09:09.819 "read": true,
00:09:09.819 "write": true,
00:09:09.819 "unmap": true,
00:09:09.819 "flush": true,
00:09:09.819 "reset": true,
"nvme_admin": false, 00:09:09.819 "nvme_io": false, 00:09:09.819 "nvme_io_md": false, 00:09:09.819 "write_zeroes": true, 00:09:09.819 "zcopy": true, 00:09:09.819 "get_zone_info": false, 00:09:09.819 "zone_management": false, 00:09:09.819 "zone_append": false, 00:09:09.819 "compare": false, 00:09:09.819 "compare_and_write": false, 00:09:09.819 "abort": true, 00:09:09.819 "seek_hole": false, 00:09:09.819 "seek_data": false, 00:09:09.819 "copy": true, 00:09:09.819 "nvme_iov_md": false 00:09:09.819 }, 00:09:09.819 "memory_domains": [ 00:09:09.819 { 00:09:09.819 "dma_device_id": "system", 00:09:09.819 "dma_device_type": 1 00:09:09.819 }, 00:09:09.819 { 00:09:09.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.819 "dma_device_type": 2 00:09:09.819 } 00:09:09.819 ], 00:09:09.819 "driver_specific": {} 00:09:09.819 } 00:09:09.819 ] 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.819 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.078 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.079 "name": "Existed_Raid", 00:09:10.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.079 "strip_size_kb": 0, 00:09:10.079 "state": "configuring", 00:09:10.079 "raid_level": "raid1", 00:09:10.079 "superblock": false, 00:09:10.079 "num_base_bdevs": 3, 00:09:10.079 "num_base_bdevs_discovered": 1, 00:09:10.079 "num_base_bdevs_operational": 3, 00:09:10.079 "base_bdevs_list": [ 00:09:10.079 { 00:09:10.079 "name": "BaseBdev1", 00:09:10.079 "uuid": "06dae142-bfa1-41ef-b587-6689e118611d", 00:09:10.079 "is_configured": true, 00:09:10.079 "data_offset": 0, 00:09:10.079 "data_size": 65536 00:09:10.079 }, 00:09:10.079 { 00:09:10.079 "name": "BaseBdev2", 00:09:10.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.079 "is_configured": false, 00:09:10.079 "data_offset": 0, 00:09:10.079 "data_size": 0 00:09:10.079 }, 00:09:10.079 { 00:09:10.079 "name": "BaseBdev3", 00:09:10.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.079 "is_configured": false, 00:09:10.079 "data_offset": 0, 00:09:10.079 "data_size": 0 00:09:10.079 } 00:09:10.079 ] 00:09:10.079 }' 
00:09:10.079 22:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.079 22:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.339 [2024-11-26 22:53:49.346510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.339 [2024-11-26 22:53:49.346615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.339 [2024-11-26 22:53:49.358548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.339 [2024-11-26 22:53:49.360299] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.339 [2024-11-26 22:53:49.360363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.339 [2024-11-26 22:53:49.360408] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.339 [2024-11-26 22:53:49.360428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.339 22:53:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.339 "name": "Existed_Raid", 00:09:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.339 "strip_size_kb": 0, 00:09:10.339 "state": "configuring", 00:09:10.339 "raid_level": "raid1", 00:09:10.339 "superblock": false, 00:09:10.339 "num_base_bdevs": 3, 00:09:10.339 "num_base_bdevs_discovered": 1, 00:09:10.339 "num_base_bdevs_operational": 3, 00:09:10.339 "base_bdevs_list": [ 00:09:10.339 { 00:09:10.339 "name": "BaseBdev1", 00:09:10.339 "uuid": "06dae142-bfa1-41ef-b587-6689e118611d", 00:09:10.339 "is_configured": true, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 65536 00:09:10.339 }, 00:09:10.339 { 00:09:10.339 "name": "BaseBdev2", 00:09:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.339 "is_configured": false, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 0 00:09:10.339 }, 00:09:10.339 { 00:09:10.339 "name": "BaseBdev3", 00:09:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.339 "is_configured": false, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 0 00:09:10.339 } 00:09:10.339 ] 00:09:10.339 }' 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.339 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.908 [2024-11-26 22:53:49.817443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.908 BaseBdev2 00:09:10.908 22:53:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.908 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.909 [ 00:09:10.909 { 00:09:10.909 "name": "BaseBdev2", 00:09:10.909 "aliases": [ 00:09:10.909 "1c8a331b-09c0-48b4-b9b0-b290796629d7" 00:09:10.909 ], 00:09:10.909 "product_name": "Malloc disk", 00:09:10.909 "block_size": 512, 00:09:10.909 "num_blocks": 65536, 00:09:10.909 "uuid": "1c8a331b-09c0-48b4-b9b0-b290796629d7", 00:09:10.909 "assigned_rate_limits": { 00:09:10.909 "rw_ios_per_sec": 0, 00:09:10.909 "rw_mbytes_per_sec": 0, 00:09:10.909 
"r_mbytes_per_sec": 0, 00:09:10.909 "w_mbytes_per_sec": 0 00:09:10.909 }, 00:09:10.909 "claimed": true, 00:09:10.909 "claim_type": "exclusive_write", 00:09:10.909 "zoned": false, 00:09:10.909 "supported_io_types": { 00:09:10.909 "read": true, 00:09:10.909 "write": true, 00:09:10.909 "unmap": true, 00:09:10.909 "flush": true, 00:09:10.909 "reset": true, 00:09:10.909 "nvme_admin": false, 00:09:10.909 "nvme_io": false, 00:09:10.909 "nvme_io_md": false, 00:09:10.909 "write_zeroes": true, 00:09:10.909 "zcopy": true, 00:09:10.909 "get_zone_info": false, 00:09:10.909 "zone_management": false, 00:09:10.909 "zone_append": false, 00:09:10.909 "compare": false, 00:09:10.909 "compare_and_write": false, 00:09:10.909 "abort": true, 00:09:10.909 "seek_hole": false, 00:09:10.909 "seek_data": false, 00:09:10.909 "copy": true, 00:09:10.909 "nvme_iov_md": false 00:09:10.909 }, 00:09:10.909 "memory_domains": [ 00:09:10.909 { 00:09:10.909 "dma_device_id": "system", 00:09:10.909 "dma_device_type": 1 00:09:10.909 }, 00:09:10.909 { 00:09:10.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.909 "dma_device_type": 2 00:09:10.909 } 00:09:10.909 ], 00:09:10.909 "driver_specific": {} 00:09:10.909 } 00:09:10.909 ] 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.909 "name": "Existed_Raid", 00:09:10.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.909 "strip_size_kb": 0, 00:09:10.909 "state": "configuring", 00:09:10.909 "raid_level": "raid1", 00:09:10.909 "superblock": false, 00:09:10.909 "num_base_bdevs": 3, 00:09:10.909 "num_base_bdevs_discovered": 2, 00:09:10.909 "num_base_bdevs_operational": 3, 00:09:10.909 "base_bdevs_list": [ 00:09:10.909 { 00:09:10.909 "name": "BaseBdev1", 00:09:10.909 "uuid": "06dae142-bfa1-41ef-b587-6689e118611d", 00:09:10.909 
"is_configured": true, 00:09:10.909 "data_offset": 0, 00:09:10.909 "data_size": 65536 00:09:10.909 }, 00:09:10.909 { 00:09:10.909 "name": "BaseBdev2", 00:09:10.909 "uuid": "1c8a331b-09c0-48b4-b9b0-b290796629d7", 00:09:10.909 "is_configured": true, 00:09:10.909 "data_offset": 0, 00:09:10.909 "data_size": 65536 00:09:10.909 }, 00:09:10.909 { 00:09:10.909 "name": "BaseBdev3", 00:09:10.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.909 "is_configured": false, 00:09:10.909 "data_offset": 0, 00:09:10.909 "data_size": 0 00:09:10.909 } 00:09:10.909 ] 00:09:10.909 }' 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.909 22:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.170 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.170 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.429 [2024-11-26 22:53:50.301119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.429 [2024-11-26 22:53:50.301262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:11.429 [2024-11-26 22:53:50.301295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:11.429 [2024-11-26 22:53:50.301684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:11.429 [2024-11-26 22:53:50.301913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:11.429 [2024-11-26 22:53:50.301971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:11.429 [2024-11-26 22:53:50.302276] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:11.429 BaseBdev3 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.429 [ 00:09:11.429 { 00:09:11.429 "name": "BaseBdev3", 00:09:11.429 "aliases": [ 00:09:11.429 "3af07863-6d1e-438a-b082-f6f8d51432f7" 00:09:11.429 ], 00:09:11.429 "product_name": "Malloc disk", 00:09:11.429 "block_size": 512, 00:09:11.429 "num_blocks": 65536, 00:09:11.429 "uuid": "3af07863-6d1e-438a-b082-f6f8d51432f7", 00:09:11.429 "assigned_rate_limits": { 
00:09:11.429 "rw_ios_per_sec": 0, 00:09:11.429 "rw_mbytes_per_sec": 0, 00:09:11.429 "r_mbytes_per_sec": 0, 00:09:11.429 "w_mbytes_per_sec": 0 00:09:11.429 }, 00:09:11.429 "claimed": true, 00:09:11.429 "claim_type": "exclusive_write", 00:09:11.429 "zoned": false, 00:09:11.429 "supported_io_types": { 00:09:11.429 "read": true, 00:09:11.429 "write": true, 00:09:11.429 "unmap": true, 00:09:11.429 "flush": true, 00:09:11.429 "reset": true, 00:09:11.429 "nvme_admin": false, 00:09:11.429 "nvme_io": false, 00:09:11.429 "nvme_io_md": false, 00:09:11.429 "write_zeroes": true, 00:09:11.429 "zcopy": true, 00:09:11.429 "get_zone_info": false, 00:09:11.429 "zone_management": false, 00:09:11.429 "zone_append": false, 00:09:11.429 "compare": false, 00:09:11.429 "compare_and_write": false, 00:09:11.429 "abort": true, 00:09:11.429 "seek_hole": false, 00:09:11.429 "seek_data": false, 00:09:11.429 "copy": true, 00:09:11.429 "nvme_iov_md": false 00:09:11.429 }, 00:09:11.429 "memory_domains": [ 00:09:11.429 { 00:09:11.429 "dma_device_id": "system", 00:09:11.429 "dma_device_type": 1 00:09:11.429 }, 00:09:11.429 { 00:09:11.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.429 "dma_device_type": 2 00:09:11.429 } 00:09:11.429 ], 00:09:11.429 "driver_specific": {} 00:09:11.429 } 00:09:11.429 ] 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.429 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.430 22:53:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.430 "name": "Existed_Raid", 00:09:11.430 "uuid": "c60810f6-4b6c-4165-9aa4-4b207b81fcb0", 00:09:11.430 "strip_size_kb": 0, 00:09:11.430 "state": "online", 00:09:11.430 "raid_level": "raid1", 00:09:11.430 "superblock": false, 00:09:11.430 "num_base_bdevs": 3, 00:09:11.430 "num_base_bdevs_discovered": 3, 00:09:11.430 "num_base_bdevs_operational": 3, 00:09:11.430 "base_bdevs_list": [ 00:09:11.430 { 00:09:11.430 "name": "BaseBdev1", 00:09:11.430 
"uuid": "06dae142-bfa1-41ef-b587-6689e118611d", 00:09:11.430 "is_configured": true, 00:09:11.430 "data_offset": 0, 00:09:11.430 "data_size": 65536 00:09:11.430 }, 00:09:11.430 { 00:09:11.430 "name": "BaseBdev2", 00:09:11.430 "uuid": "1c8a331b-09c0-48b4-b9b0-b290796629d7", 00:09:11.430 "is_configured": true, 00:09:11.430 "data_offset": 0, 00:09:11.430 "data_size": 65536 00:09:11.430 }, 00:09:11.430 { 00:09:11.430 "name": "BaseBdev3", 00:09:11.430 "uuid": "3af07863-6d1e-438a-b082-f6f8d51432f7", 00:09:11.430 "is_configured": true, 00:09:11.430 "data_offset": 0, 00:09:11.430 "data_size": 65536 00:09:11.430 } 00:09:11.430 ] 00:09:11.430 }' 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.430 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.689 [2024-11-26 
22:53:50.749554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.689 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.689 "name": "Existed_Raid", 00:09:11.689 "aliases": [ 00:09:11.689 "c60810f6-4b6c-4165-9aa4-4b207b81fcb0" 00:09:11.689 ], 00:09:11.689 "product_name": "Raid Volume", 00:09:11.689 "block_size": 512, 00:09:11.689 "num_blocks": 65536, 00:09:11.689 "uuid": "c60810f6-4b6c-4165-9aa4-4b207b81fcb0", 00:09:11.689 "assigned_rate_limits": { 00:09:11.689 "rw_ios_per_sec": 0, 00:09:11.689 "rw_mbytes_per_sec": 0, 00:09:11.689 "r_mbytes_per_sec": 0, 00:09:11.689 "w_mbytes_per_sec": 0 00:09:11.689 }, 00:09:11.689 "claimed": false, 00:09:11.689 "zoned": false, 00:09:11.689 "supported_io_types": { 00:09:11.689 "read": true, 00:09:11.689 "write": true, 00:09:11.689 "unmap": false, 00:09:11.689 "flush": false, 00:09:11.689 "reset": true, 00:09:11.689 "nvme_admin": false, 00:09:11.689 "nvme_io": false, 00:09:11.689 "nvme_io_md": false, 00:09:11.689 "write_zeroes": true, 00:09:11.689 "zcopy": false, 00:09:11.689 "get_zone_info": false, 00:09:11.689 "zone_management": false, 00:09:11.689 "zone_append": false, 00:09:11.689 "compare": false, 00:09:11.689 "compare_and_write": false, 00:09:11.689 "abort": false, 00:09:11.689 "seek_hole": false, 00:09:11.689 "seek_data": false, 00:09:11.689 "copy": false, 00:09:11.689 "nvme_iov_md": false 00:09:11.689 }, 00:09:11.689 "memory_domains": [ 00:09:11.689 { 00:09:11.689 "dma_device_id": "system", 00:09:11.689 "dma_device_type": 1 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.689 "dma_device_type": 2 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "dma_device_id": "system", 00:09:11.689 "dma_device_type": 1 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:11.689 "dma_device_type": 2 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "dma_device_id": "system", 00:09:11.689 "dma_device_type": 1 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.689 "dma_device_type": 2 00:09:11.689 } 00:09:11.689 ], 00:09:11.689 "driver_specific": { 00:09:11.689 "raid": { 00:09:11.689 "uuid": "c60810f6-4b6c-4165-9aa4-4b207b81fcb0", 00:09:11.689 "strip_size_kb": 0, 00:09:11.689 "state": "online", 00:09:11.689 "raid_level": "raid1", 00:09:11.689 "superblock": false, 00:09:11.689 "num_base_bdevs": 3, 00:09:11.689 "num_base_bdevs_discovered": 3, 00:09:11.689 "num_base_bdevs_operational": 3, 00:09:11.689 "base_bdevs_list": [ 00:09:11.689 { 00:09:11.689 "name": "BaseBdev1", 00:09:11.689 "uuid": "06dae142-bfa1-41ef-b587-6689e118611d", 00:09:11.689 "is_configured": true, 00:09:11.689 "data_offset": 0, 00:09:11.689 "data_size": 65536 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "name": "BaseBdev2", 00:09:11.689 "uuid": "1c8a331b-09c0-48b4-b9b0-b290796629d7", 00:09:11.689 "is_configured": true, 00:09:11.689 "data_offset": 0, 00:09:11.690 "data_size": 65536 00:09:11.690 }, 00:09:11.690 { 00:09:11.690 "name": "BaseBdev3", 00:09:11.690 "uuid": "3af07863-6d1e-438a-b082-f6f8d51432f7", 00:09:11.690 "is_configured": true, 00:09:11.690 "data_offset": 0, 00:09:11.690 "data_size": 65536 00:09:11.690 } 00:09:11.690 ] 00:09:11.690 } 00:09:11.690 } 00:09:11.690 }' 00:09:11.690 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.949 BaseBdev2 00:09:11.949 BaseBdev3' 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.949 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.950 22:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.950 [2024-11-26 22:53:51.021399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:11.950 22:53:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.950 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.210 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.210 "name": "Existed_Raid", 00:09:12.210 "uuid": "c60810f6-4b6c-4165-9aa4-4b207b81fcb0", 00:09:12.210 "strip_size_kb": 0, 00:09:12.210 "state": "online", 00:09:12.210 "raid_level": "raid1", 
00:09:12.210 "superblock": false, 00:09:12.210 "num_base_bdevs": 3, 00:09:12.210 "num_base_bdevs_discovered": 2, 00:09:12.210 "num_base_bdevs_operational": 2, 00:09:12.210 "base_bdevs_list": [ 00:09:12.210 { 00:09:12.210 "name": null, 00:09:12.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.210 "is_configured": false, 00:09:12.210 "data_offset": 0, 00:09:12.210 "data_size": 65536 00:09:12.210 }, 00:09:12.210 { 00:09:12.210 "name": "BaseBdev2", 00:09:12.210 "uuid": "1c8a331b-09c0-48b4-b9b0-b290796629d7", 00:09:12.210 "is_configured": true, 00:09:12.210 "data_offset": 0, 00:09:12.210 "data_size": 65536 00:09:12.210 }, 00:09:12.210 { 00:09:12.210 "name": "BaseBdev3", 00:09:12.210 "uuid": "3af07863-6d1e-438a-b082-f6f8d51432f7", 00:09:12.210 "is_configured": true, 00:09:12.210 "data_offset": 0, 00:09:12.210 "data_size": 65536 00:09:12.210 } 00:09:12.210 ] 00:09:12.210 }' 00:09:12.210 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.210 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.470 22:53:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 [2024-11-26 22:53:51.472582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.470 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 
[2024-11-26 22:53:51.535581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.470 [2024-11-26 22:53:51.535710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.470 [2024-11-26 22:53:51.546757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.470 [2024-11-26 22:53:51.546866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.470 [2024-11-26 22:53:51.546906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.471 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 BaseBdev2 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.731 [ 00:09:12.731 { 00:09:12.731 "name": "BaseBdev2", 00:09:12.731 "aliases": [ 00:09:12.731 "2c96f3e9-d65f-44b1-ae4f-efadd9d03771" 00:09:12.731 ], 00:09:12.731 "product_name": "Malloc disk", 00:09:12.731 "block_size": 512, 00:09:12.731 "num_blocks": 65536, 00:09:12.731 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:12.731 "assigned_rate_limits": { 00:09:12.731 "rw_ios_per_sec": 0, 00:09:12.731 "rw_mbytes_per_sec": 0, 00:09:12.731 "r_mbytes_per_sec": 0, 00:09:12.731 "w_mbytes_per_sec": 0 00:09:12.731 }, 00:09:12.731 "claimed": false, 00:09:12.731 "zoned": false, 00:09:12.731 "supported_io_types": { 00:09:12.731 "read": true, 00:09:12.731 "write": true, 00:09:12.731 "unmap": true, 00:09:12.731 "flush": true, 00:09:12.731 "reset": true, 00:09:12.731 "nvme_admin": false, 00:09:12.731 "nvme_io": false, 00:09:12.731 "nvme_io_md": false, 00:09:12.731 "write_zeroes": true, 00:09:12.731 "zcopy": true, 00:09:12.731 "get_zone_info": false, 00:09:12.731 "zone_management": false, 00:09:12.731 "zone_append": false, 00:09:12.731 "compare": false, 00:09:12.731 "compare_and_write": false, 00:09:12.731 "abort": true, 00:09:12.731 "seek_hole": false, 00:09:12.731 "seek_data": false, 00:09:12.731 "copy": true, 00:09:12.731 "nvme_iov_md": false 00:09:12.731 }, 00:09:12.731 "memory_domains": [ 00:09:12.731 { 00:09:12.731 "dma_device_id": "system", 00:09:12.731 "dma_device_type": 1 00:09:12.731 }, 00:09:12.731 { 00:09:12.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.731 "dma_device_type": 2 00:09:12.731 } 00:09:12.731 ], 00:09:12.731 "driver_specific": {} 00:09:12.731 } 00:09:12.731 ] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 BaseBdev3 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.731 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.732 [ 00:09:12.732 { 00:09:12.732 "name": "BaseBdev3", 00:09:12.732 "aliases": [ 00:09:12.732 "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d" 00:09:12.732 ], 00:09:12.732 "product_name": "Malloc disk", 00:09:12.732 "block_size": 512, 00:09:12.732 "num_blocks": 65536, 00:09:12.732 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:12.732 "assigned_rate_limits": { 00:09:12.732 "rw_ios_per_sec": 0, 00:09:12.732 "rw_mbytes_per_sec": 0, 00:09:12.732 "r_mbytes_per_sec": 0, 00:09:12.732 "w_mbytes_per_sec": 0 00:09:12.732 }, 00:09:12.732 "claimed": false, 00:09:12.732 "zoned": false, 00:09:12.732 "supported_io_types": { 00:09:12.732 "read": true, 00:09:12.732 "write": true, 00:09:12.732 "unmap": true, 00:09:12.732 "flush": true, 00:09:12.732 "reset": true, 00:09:12.732 "nvme_admin": false, 00:09:12.732 "nvme_io": false, 00:09:12.732 "nvme_io_md": false, 00:09:12.732 "write_zeroes": true, 00:09:12.732 "zcopy": true, 00:09:12.732 "get_zone_info": false, 00:09:12.732 "zone_management": false, 00:09:12.732 "zone_append": false, 00:09:12.732 "compare": false, 00:09:12.732 "compare_and_write": false, 00:09:12.732 "abort": true, 00:09:12.732 "seek_hole": false, 00:09:12.732 "seek_data": false, 00:09:12.732 "copy": true, 00:09:12.732 "nvme_iov_md": false 00:09:12.732 }, 00:09:12.732 "memory_domains": [ 00:09:12.732 { 00:09:12.732 "dma_device_id": "system", 00:09:12.732 "dma_device_type": 1 00:09:12.732 }, 00:09:12.732 { 00:09:12.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.732 "dma_device_type": 2 00:09:12.732 } 00:09:12.732 ], 00:09:12.732 "driver_specific": {} 00:09:12.732 } 00:09:12.732 ] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.732 [2024-11-26 22:53:51.697394] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.732 [2024-11-26 22:53:51.697525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.732 [2024-11-26 22:53:51.697577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.732 [2024-11-26 22:53:51.699323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.732 "name": "Existed_Raid", 00:09:12.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.732 "strip_size_kb": 0, 00:09:12.732 "state": "configuring", 00:09:12.732 "raid_level": "raid1", 00:09:12.732 "superblock": false, 00:09:12.732 "num_base_bdevs": 3, 00:09:12.732 "num_base_bdevs_discovered": 2, 00:09:12.732 "num_base_bdevs_operational": 3, 00:09:12.732 "base_bdevs_list": [ 00:09:12.732 { 00:09:12.732 "name": "BaseBdev1", 00:09:12.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.732 "is_configured": false, 00:09:12.732 "data_offset": 0, 00:09:12.732 "data_size": 0 00:09:12.732 }, 00:09:12.732 { 00:09:12.732 "name": "BaseBdev2", 00:09:12.732 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:12.732 "is_configured": true, 00:09:12.732 "data_offset": 0, 00:09:12.732 "data_size": 65536 00:09:12.732 }, 00:09:12.732 { 00:09:12.732 "name": "BaseBdev3", 00:09:12.732 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:12.732 "is_configured": true, 00:09:12.732 "data_offset": 0, 00:09:12.732 "data_size": 65536 00:09:12.732 } 00:09:12.732 ] 
00:09:12.732 }' 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.732 22:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.992 [2024-11-26 22:53:52.093493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.992 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.251 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.251 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.251 "name": "Existed_Raid", 00:09:13.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.251 "strip_size_kb": 0, 00:09:13.251 "state": "configuring", 00:09:13.251 "raid_level": "raid1", 00:09:13.251 "superblock": false, 00:09:13.251 "num_base_bdevs": 3, 00:09:13.251 "num_base_bdevs_discovered": 1, 00:09:13.251 "num_base_bdevs_operational": 3, 00:09:13.251 "base_bdevs_list": [ 00:09:13.251 { 00:09:13.251 "name": "BaseBdev1", 00:09:13.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.251 "is_configured": false, 00:09:13.251 "data_offset": 0, 00:09:13.251 "data_size": 0 00:09:13.251 }, 00:09:13.251 { 00:09:13.251 "name": null, 00:09:13.251 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:13.251 "is_configured": false, 00:09:13.251 "data_offset": 0, 00:09:13.251 "data_size": 65536 00:09:13.251 }, 00:09:13.251 { 00:09:13.251 "name": "BaseBdev3", 00:09:13.251 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:13.251 "is_configured": true, 00:09:13.251 "data_offset": 0, 00:09:13.251 "data_size": 65536 00:09:13.251 } 00:09:13.251 ] 00:09:13.251 }' 00:09:13.251 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.251 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.511 [2024-11-26 22:53:52.540318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.511 BaseBdev1 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.511 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.511 [ 00:09:13.511 { 00:09:13.511 "name": "BaseBdev1", 00:09:13.511 "aliases": [ 00:09:13.511 "a9878bc0-319e-438a-8677-4b97deb76b84" 00:09:13.511 ], 00:09:13.511 "product_name": "Malloc disk", 00:09:13.511 "block_size": 512, 00:09:13.511 "num_blocks": 65536, 00:09:13.511 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:13.511 "assigned_rate_limits": { 00:09:13.511 "rw_ios_per_sec": 0, 00:09:13.511 "rw_mbytes_per_sec": 0, 00:09:13.511 "r_mbytes_per_sec": 0, 00:09:13.511 "w_mbytes_per_sec": 0 00:09:13.511 }, 00:09:13.511 "claimed": true, 00:09:13.511 "claim_type": "exclusive_write", 00:09:13.511 "zoned": false, 00:09:13.511 "supported_io_types": { 00:09:13.512 "read": true, 00:09:13.512 "write": true, 00:09:13.512 "unmap": true, 00:09:13.512 "flush": true, 00:09:13.512 "reset": true, 00:09:13.512 "nvme_admin": false, 00:09:13.512 "nvme_io": false, 00:09:13.512 "nvme_io_md": false, 00:09:13.512 "write_zeroes": true, 00:09:13.512 "zcopy": true, 00:09:13.512 "get_zone_info": false, 00:09:13.512 "zone_management": false, 00:09:13.512 "zone_append": false, 00:09:13.512 "compare": false, 00:09:13.512 "compare_and_write": false, 00:09:13.512 "abort": true, 00:09:13.512 "seek_hole": false, 00:09:13.512 "seek_data": false, 00:09:13.512 "copy": true, 00:09:13.512 "nvme_iov_md": false 00:09:13.512 }, 
00:09:13.512 "memory_domains": [ 00:09:13.512 { 00:09:13.512 "dma_device_id": "system", 00:09:13.512 "dma_device_type": 1 00:09:13.512 }, 00:09:13.512 { 00:09:13.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.512 "dma_device_type": 2 00:09:13.512 } 00:09:13.512 ], 00:09:13.512 "driver_specific": {} 00:09:13.512 } 00:09:13.512 ] 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.512 "name": "Existed_Raid", 00:09:13.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.512 "strip_size_kb": 0, 00:09:13.512 "state": "configuring", 00:09:13.512 "raid_level": "raid1", 00:09:13.512 "superblock": false, 00:09:13.512 "num_base_bdevs": 3, 00:09:13.512 "num_base_bdevs_discovered": 2, 00:09:13.512 "num_base_bdevs_operational": 3, 00:09:13.512 "base_bdevs_list": [ 00:09:13.512 { 00:09:13.512 "name": "BaseBdev1", 00:09:13.512 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:13.512 "is_configured": true, 00:09:13.512 "data_offset": 0, 00:09:13.512 "data_size": 65536 00:09:13.512 }, 00:09:13.512 { 00:09:13.512 "name": null, 00:09:13.512 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:13.512 "is_configured": false, 00:09:13.512 "data_offset": 0, 00:09:13.512 "data_size": 65536 00:09:13.512 }, 00:09:13.512 { 00:09:13.512 "name": "BaseBdev3", 00:09:13.512 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:13.512 "is_configured": true, 00:09:13.512 "data_offset": 0, 00:09:13.512 "data_size": 65536 00:09:13.512 } 00:09:13.512 ] 00:09:13.512 }' 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.512 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.081 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.081 22:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.081 22:53:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.081 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.081 22:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.081 [2024-11-26 22:53:53.024494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.081 
22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.081 "name": "Existed_Raid", 00:09:14.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.081 "strip_size_kb": 0, 00:09:14.081 "state": "configuring", 00:09:14.081 "raid_level": "raid1", 00:09:14.081 "superblock": false, 00:09:14.081 "num_base_bdevs": 3, 00:09:14.081 "num_base_bdevs_discovered": 1, 00:09:14.081 "num_base_bdevs_operational": 3, 00:09:14.081 "base_bdevs_list": [ 00:09:14.081 { 00:09:14.081 "name": "BaseBdev1", 00:09:14.081 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:14.081 "is_configured": true, 00:09:14.081 "data_offset": 0, 00:09:14.081 "data_size": 65536 00:09:14.081 }, 00:09:14.081 { 00:09:14.081 "name": null, 00:09:14.081 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:14.081 "is_configured": false, 00:09:14.081 "data_offset": 0, 00:09:14.081 "data_size": 65536 00:09:14.081 }, 00:09:14.081 { 00:09:14.081 "name": null, 00:09:14.081 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:14.081 "is_configured": false, 00:09:14.081 "data_offset": 0, 00:09:14.081 "data_size": 65536 00:09:14.081 } 00:09:14.081 ] 00:09:14.081 }' 00:09:14.081 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.081 22:53:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.341 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.341 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.602 [2024-11-26 22:53:53.512648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.602 22:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.602 "name": "Existed_Raid", 00:09:14.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.602 "strip_size_kb": 0, 00:09:14.602 "state": "configuring", 00:09:14.602 "raid_level": "raid1", 00:09:14.602 "superblock": false, 00:09:14.602 "num_base_bdevs": 3, 00:09:14.602 "num_base_bdevs_discovered": 2, 00:09:14.602 "num_base_bdevs_operational": 3, 00:09:14.602 "base_bdevs_list": [ 00:09:14.602 { 00:09:14.602 "name": "BaseBdev1", 00:09:14.602 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:14.602 "is_configured": true, 00:09:14.602 "data_offset": 0, 00:09:14.602 "data_size": 65536 00:09:14.602 }, 00:09:14.602 { 00:09:14.602 "name": null, 00:09:14.602 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:14.602 "is_configured": false, 00:09:14.602 "data_offset": 
0, 00:09:14.602 "data_size": 65536 00:09:14.602 }, 00:09:14.602 { 00:09:14.602 "name": "BaseBdev3", 00:09:14.602 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:14.602 "is_configured": true, 00:09:14.602 "data_offset": 0, 00:09:14.602 "data_size": 65536 00:09:14.602 } 00:09:14.602 ] 00:09:14.602 }' 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.602 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.862 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.862 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.862 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.862 22:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.862 22:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.122 [2024-11-26 22:53:54.016802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.122 "name": "Existed_Raid", 00:09:15.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.122 "strip_size_kb": 0, 00:09:15.122 "state": "configuring", 00:09:15.122 "raid_level": "raid1", 00:09:15.122 "superblock": false, 00:09:15.122 "num_base_bdevs": 3, 00:09:15.122 "num_base_bdevs_discovered": 1, 00:09:15.122 "num_base_bdevs_operational": 3, 00:09:15.122 "base_bdevs_list": [ 
00:09:15.122 { 00:09:15.122 "name": null, 00:09:15.122 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:15.122 "is_configured": false, 00:09:15.122 "data_offset": 0, 00:09:15.122 "data_size": 65536 00:09:15.122 }, 00:09:15.122 { 00:09:15.122 "name": null, 00:09:15.122 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:15.122 "is_configured": false, 00:09:15.122 "data_offset": 0, 00:09:15.122 "data_size": 65536 00:09:15.122 }, 00:09:15.122 { 00:09:15.122 "name": "BaseBdev3", 00:09:15.122 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:15.122 "is_configured": true, 00:09:15.122 "data_offset": 0, 00:09:15.122 "data_size": 65536 00:09:15.122 } 00:09:15.122 ] 00:09:15.122 }' 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.122 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.382 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.382 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.382 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.382 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.382 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.642 [2024-11-26 22:53:54.531279] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.642 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:15.642 "name": "Existed_Raid", 00:09:15.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.642 "strip_size_kb": 0, 00:09:15.642 "state": "configuring", 00:09:15.642 "raid_level": "raid1", 00:09:15.642 "superblock": false, 00:09:15.642 "num_base_bdevs": 3, 00:09:15.642 "num_base_bdevs_discovered": 2, 00:09:15.642 "num_base_bdevs_operational": 3, 00:09:15.642 "base_bdevs_list": [ 00:09:15.642 { 00:09:15.642 "name": null, 00:09:15.642 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:15.642 "is_configured": false, 00:09:15.642 "data_offset": 0, 00:09:15.642 "data_size": 65536 00:09:15.642 }, 00:09:15.642 { 00:09:15.642 "name": "BaseBdev2", 00:09:15.642 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:15.642 "is_configured": true, 00:09:15.642 "data_offset": 0, 00:09:15.642 "data_size": 65536 00:09:15.642 }, 00:09:15.642 { 00:09:15.642 "name": "BaseBdev3", 00:09:15.642 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:15.642 "is_configured": true, 00:09:15.642 "data_offset": 0, 00:09:15.643 "data_size": 65536 00:09:15.643 } 00:09:15.643 ] 00:09:15.643 }' 00:09:15.643 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.643 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.903 22:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.903 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a9878bc0-319e-438a-8677-4b97deb76b84 00:09:15.903 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.903 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.903 [2024-11-26 22:53:55.026163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.903 [2024-11-26 22:53:55.026304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.903 [2024-11-26 22:53:55.026334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.903 [2024-11-26 22:53:55.026596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:15.903 [2024-11-26 22:53:55.026754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.903 [2024-11-26 22:53:55.026789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.903 [2024-11-26 22:53:55.026997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.903 NewBaseBdev 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.163 
22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.163 [ 00:09:16.163 { 00:09:16.163 "name": "NewBaseBdev", 00:09:16.163 "aliases": [ 00:09:16.163 "a9878bc0-319e-438a-8677-4b97deb76b84" 00:09:16.163 ], 00:09:16.163 "product_name": "Malloc disk", 00:09:16.163 "block_size": 512, 00:09:16.163 "num_blocks": 65536, 00:09:16.163 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:16.163 "assigned_rate_limits": { 00:09:16.163 "rw_ios_per_sec": 0, 00:09:16.163 "rw_mbytes_per_sec": 0, 00:09:16.163 "r_mbytes_per_sec": 0, 00:09:16.163 "w_mbytes_per_sec": 0 00:09:16.163 }, 00:09:16.163 
"claimed": true, 00:09:16.163 "claim_type": "exclusive_write", 00:09:16.163 "zoned": false, 00:09:16.163 "supported_io_types": { 00:09:16.163 "read": true, 00:09:16.163 "write": true, 00:09:16.163 "unmap": true, 00:09:16.163 "flush": true, 00:09:16.163 "reset": true, 00:09:16.163 "nvme_admin": false, 00:09:16.163 "nvme_io": false, 00:09:16.163 "nvme_io_md": false, 00:09:16.163 "write_zeroes": true, 00:09:16.163 "zcopy": true, 00:09:16.163 "get_zone_info": false, 00:09:16.163 "zone_management": false, 00:09:16.163 "zone_append": false, 00:09:16.163 "compare": false, 00:09:16.163 "compare_and_write": false, 00:09:16.163 "abort": true, 00:09:16.163 "seek_hole": false, 00:09:16.163 "seek_data": false, 00:09:16.163 "copy": true, 00:09:16.163 "nvme_iov_md": false 00:09:16.163 }, 00:09:16.163 "memory_domains": [ 00:09:16.163 { 00:09:16.163 "dma_device_id": "system", 00:09:16.163 "dma_device_type": 1 00:09:16.163 }, 00:09:16.163 { 00:09:16.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.163 "dma_device_type": 2 00:09:16.163 } 00:09:16.163 ], 00:09:16.163 "driver_specific": {} 00:09:16.163 } 00:09:16.163 ] 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.163 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.163 "name": "Existed_Raid", 00:09:16.163 "uuid": "b8d6beb9-bcbd-4ddc-a2e3-d805128c1bd5", 00:09:16.163 "strip_size_kb": 0, 00:09:16.163 "state": "online", 00:09:16.163 "raid_level": "raid1", 00:09:16.163 "superblock": false, 00:09:16.163 "num_base_bdevs": 3, 00:09:16.163 "num_base_bdevs_discovered": 3, 00:09:16.163 "num_base_bdevs_operational": 3, 00:09:16.164 "base_bdevs_list": [ 00:09:16.164 { 00:09:16.164 "name": "NewBaseBdev", 00:09:16.164 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:16.164 "is_configured": true, 00:09:16.164 "data_offset": 0, 00:09:16.164 "data_size": 65536 00:09:16.164 }, 00:09:16.164 { 00:09:16.164 "name": "BaseBdev2", 00:09:16.164 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:16.164 "is_configured": true, 00:09:16.164 "data_offset": 0, 00:09:16.164 "data_size": 65536 
00:09:16.164 }, 00:09:16.164 { 00:09:16.164 "name": "BaseBdev3", 00:09:16.164 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:16.164 "is_configured": true, 00:09:16.164 "data_offset": 0, 00:09:16.164 "data_size": 65536 00:09:16.164 } 00:09:16.164 ] 00:09:16.164 }' 00:09:16.164 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.164 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.424 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.425 [2024-11-26 22:53:55.458627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.425 "name": "Existed_Raid", 00:09:16.425 "aliases": [ 
00:09:16.425 "b8d6beb9-bcbd-4ddc-a2e3-d805128c1bd5" 00:09:16.425 ], 00:09:16.425 "product_name": "Raid Volume", 00:09:16.425 "block_size": 512, 00:09:16.425 "num_blocks": 65536, 00:09:16.425 "uuid": "b8d6beb9-bcbd-4ddc-a2e3-d805128c1bd5", 00:09:16.425 "assigned_rate_limits": { 00:09:16.425 "rw_ios_per_sec": 0, 00:09:16.425 "rw_mbytes_per_sec": 0, 00:09:16.425 "r_mbytes_per_sec": 0, 00:09:16.425 "w_mbytes_per_sec": 0 00:09:16.425 }, 00:09:16.425 "claimed": false, 00:09:16.425 "zoned": false, 00:09:16.425 "supported_io_types": { 00:09:16.425 "read": true, 00:09:16.425 "write": true, 00:09:16.425 "unmap": false, 00:09:16.425 "flush": false, 00:09:16.425 "reset": true, 00:09:16.425 "nvme_admin": false, 00:09:16.425 "nvme_io": false, 00:09:16.425 "nvme_io_md": false, 00:09:16.425 "write_zeroes": true, 00:09:16.425 "zcopy": false, 00:09:16.425 "get_zone_info": false, 00:09:16.425 "zone_management": false, 00:09:16.425 "zone_append": false, 00:09:16.425 "compare": false, 00:09:16.425 "compare_and_write": false, 00:09:16.425 "abort": false, 00:09:16.425 "seek_hole": false, 00:09:16.425 "seek_data": false, 00:09:16.425 "copy": false, 00:09:16.425 "nvme_iov_md": false 00:09:16.425 }, 00:09:16.425 "memory_domains": [ 00:09:16.425 { 00:09:16.425 "dma_device_id": "system", 00:09:16.425 "dma_device_type": 1 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.425 "dma_device_type": 2 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "dma_device_id": "system", 00:09:16.425 "dma_device_type": 1 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.425 "dma_device_type": 2 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "dma_device_id": "system", 00:09:16.425 "dma_device_type": 1 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.425 "dma_device_type": 2 00:09:16.425 } 00:09:16.425 ], 00:09:16.425 "driver_specific": { 00:09:16.425 "raid": { 00:09:16.425 "uuid": 
"b8d6beb9-bcbd-4ddc-a2e3-d805128c1bd5", 00:09:16.425 "strip_size_kb": 0, 00:09:16.425 "state": "online", 00:09:16.425 "raid_level": "raid1", 00:09:16.425 "superblock": false, 00:09:16.425 "num_base_bdevs": 3, 00:09:16.425 "num_base_bdevs_discovered": 3, 00:09:16.425 "num_base_bdevs_operational": 3, 00:09:16.425 "base_bdevs_list": [ 00:09:16.425 { 00:09:16.425 "name": "NewBaseBdev", 00:09:16.425 "uuid": "a9878bc0-319e-438a-8677-4b97deb76b84", 00:09:16.425 "is_configured": true, 00:09:16.425 "data_offset": 0, 00:09:16.425 "data_size": 65536 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "name": "BaseBdev2", 00:09:16.425 "uuid": "2c96f3e9-d65f-44b1-ae4f-efadd9d03771", 00:09:16.425 "is_configured": true, 00:09:16.425 "data_offset": 0, 00:09:16.425 "data_size": 65536 00:09:16.425 }, 00:09:16.425 { 00:09:16.425 "name": "BaseBdev3", 00:09:16.425 "uuid": "75d32b7e-e7e6-41a8-ac5b-7d00b7de756d", 00:09:16.425 "is_configured": true, 00:09:16.425 "data_offset": 0, 00:09:16.425 "data_size": 65536 00:09:16.425 } 00:09:16.425 ] 00:09:16.425 } 00:09:16.425 } 00:09:16.425 }' 00:09:16.425 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.685 BaseBdev2 00:09:16.685 BaseBdev3' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.685 22:53:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.685 
22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 [2024-11-26 22:53:55.746420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.685 [2024-11-26 22:53:55.746444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.685 [2024-11-26 22:53:55.746505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.685 [2024-11-26 22:53:55.746732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.685 [2024-11-26 22:53:55.746743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80051 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80051 ']' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80051 00:09:16.685 22:53:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80051 00:09:16.685 killing process with pid 80051 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80051' 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80051 00:09:16.685 [2024-11-26 22:53:55.792884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.685 22:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80051 00:09:16.945 [2024-11-26 22:53:55.823273] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.946 22:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.946 00:09:16.946 real 0m8.534s 00:09:16.946 user 0m14.539s 00:09:16.946 sys 0m1.761s 00:09:16.946 22:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.946 22:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.946 ************************************ 00:09:16.946 END TEST raid_state_function_test 00:09:16.946 ************************************ 00:09:17.206 22:53:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:17.206 22:53:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.206 22:53:56 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:09:17.206 22:53:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.206 ************************************ 00:09:17.206 START TEST raid_state_function_test_sb 00:09:17.206 ************************************ 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80656 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80656' 00:09:17.206 Process raid pid: 80656 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80656 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80656 ']' 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.206 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.206 22:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.206 [2024-11-26 22:53:56.209638] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:17.206 [2024-11-26 22:53:56.209751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.466 [2024-11-26 22:53:56.345833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:17.466 [2024-11-26 22:53:56.381059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.466 [2024-11-26 22:53:56.405798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.466 [2024-11-26 22:53:56.446915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.466 [2024-11-26 22:53:56.446950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.037 [2024-11-26 22:53:57.026185] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.037 [2024-11-26 22:53:57.026343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.037 [2024-11-26 22:53:57.026376] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.037 [2024-11-26 22:53:57.026397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.037 [2024-11-26 22:53:57.026421] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.037 [2024-11-26 22:53:57.026439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.037 22:53:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.037 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.038 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.038 "name": "Existed_Raid", 00:09:18.038 "uuid": "37868513-6e38-465d-aaac-6cb437971c8f", 00:09:18.038 "strip_size_kb": 0, 
00:09:18.038 "state": "configuring", 00:09:18.038 "raid_level": "raid1", 00:09:18.038 "superblock": true, 00:09:18.038 "num_base_bdevs": 3, 00:09:18.038 "num_base_bdevs_discovered": 0, 00:09:18.038 "num_base_bdevs_operational": 3, 00:09:18.038 "base_bdevs_list": [ 00:09:18.038 { 00:09:18.038 "name": "BaseBdev1", 00:09:18.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.038 "is_configured": false, 00:09:18.038 "data_offset": 0, 00:09:18.038 "data_size": 0 00:09:18.038 }, 00:09:18.038 { 00:09:18.038 "name": "BaseBdev2", 00:09:18.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.038 "is_configured": false, 00:09:18.038 "data_offset": 0, 00:09:18.038 "data_size": 0 00:09:18.038 }, 00:09:18.038 { 00:09:18.038 "name": "BaseBdev3", 00:09:18.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.038 "is_configured": false, 00:09:18.038 "data_offset": 0, 00:09:18.038 "data_size": 0 00:09:18.038 } 00:09:18.038 ] 00:09:18.038 }' 00:09:18.038 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.038 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.608 [2024-11-26 22:53:57.486243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.608 [2024-11-26 22:53:57.486290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.608 [2024-11-26 22:53:57.498280] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.608 [2024-11-26 22:53:57.498359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.608 [2024-11-26 22:53:57.498387] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.608 [2024-11-26 22:53:57.498407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.608 [2024-11-26 22:53:57.498425] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.608 [2024-11-26 22:53:57.498443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.608 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.608 BaseBdev1 00:09:18.608 [2024-11-26 22:53:57.518986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.609 [ 00:09:18.609 { 00:09:18.609 "name": "BaseBdev1", 00:09:18.609 "aliases": [ 00:09:18.609 "7a3648f9-2f54-4646-9cfc-5989af165f18" 00:09:18.609 ], 00:09:18.609 "product_name": "Malloc disk", 00:09:18.609 "block_size": 512, 00:09:18.609 "num_blocks": 65536, 00:09:18.609 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:18.609 "assigned_rate_limits": { 00:09:18.609 "rw_ios_per_sec": 0, 00:09:18.609 "rw_mbytes_per_sec": 0, 00:09:18.609 "r_mbytes_per_sec": 0, 00:09:18.609 "w_mbytes_per_sec": 0 00:09:18.609 }, 00:09:18.609 "claimed": true, 00:09:18.609 "claim_type": "exclusive_write", 00:09:18.609 "zoned": false, 00:09:18.609 "supported_io_types": { 
00:09:18.609 "read": true, 00:09:18.609 "write": true, 00:09:18.609 "unmap": true, 00:09:18.609 "flush": true, 00:09:18.609 "reset": true, 00:09:18.609 "nvme_admin": false, 00:09:18.609 "nvme_io": false, 00:09:18.609 "nvme_io_md": false, 00:09:18.609 "write_zeroes": true, 00:09:18.609 "zcopy": true, 00:09:18.609 "get_zone_info": false, 00:09:18.609 "zone_management": false, 00:09:18.609 "zone_append": false, 00:09:18.609 "compare": false, 00:09:18.609 "compare_and_write": false, 00:09:18.609 "abort": true, 00:09:18.609 "seek_hole": false, 00:09:18.609 "seek_data": false, 00:09:18.609 "copy": true, 00:09:18.609 "nvme_iov_md": false 00:09:18.609 }, 00:09:18.609 "memory_domains": [ 00:09:18.609 { 00:09:18.609 "dma_device_id": "system", 00:09:18.609 "dma_device_type": 1 00:09:18.609 }, 00:09:18.609 { 00:09:18.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.609 "dma_device_type": 2 00:09:18.609 } 00:09:18.609 ], 00:09:18.609 "driver_specific": {} 00:09:18.609 } 00:09:18.609 ] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.609 22:53:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.609 "name": "Existed_Raid", 00:09:18.609 "uuid": "6e1ee048-df6d-49af-b044-87285ad697c3", 00:09:18.609 "strip_size_kb": 0, 00:09:18.609 "state": "configuring", 00:09:18.609 "raid_level": "raid1", 00:09:18.609 "superblock": true, 00:09:18.609 "num_base_bdevs": 3, 00:09:18.609 "num_base_bdevs_discovered": 1, 00:09:18.609 "num_base_bdevs_operational": 3, 00:09:18.609 "base_bdevs_list": [ 00:09:18.609 { 00:09:18.609 "name": "BaseBdev1", 00:09:18.609 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:18.609 "is_configured": true, 00:09:18.609 "data_offset": 2048, 00:09:18.609 "data_size": 63488 00:09:18.609 }, 00:09:18.609 { 00:09:18.609 "name": "BaseBdev2", 00:09:18.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.609 "is_configured": false, 00:09:18.609 "data_offset": 0, 00:09:18.609 "data_size": 0 00:09:18.609 }, 00:09:18.609 { 00:09:18.609 "name": 
"BaseBdev3", 00:09:18.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.609 "is_configured": false, 00:09:18.609 "data_offset": 0, 00:09:18.609 "data_size": 0 00:09:18.609 } 00:09:18.609 ] 00:09:18.609 }' 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.609 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.870 [2024-11-26 22:53:57.915125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.870 [2024-11-26 22:53:57.915244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.870 [2024-11-26 22:53:57.927172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.870 [2024-11-26 22:53:57.928982] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.870 [2024-11-26 22:53:57.929055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.870 [2024-11-26 22:53:57.929087] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.870 [2024-11-26 22:53:57.929109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.870 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.870 "name": "Existed_Raid", 00:09:18.870 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:18.870 "strip_size_kb": 0, 00:09:18.870 "state": "configuring", 00:09:18.870 "raid_level": "raid1", 00:09:18.870 "superblock": true, 00:09:18.870 "num_base_bdevs": 3, 00:09:18.870 "num_base_bdevs_discovered": 1, 00:09:18.870 "num_base_bdevs_operational": 3, 00:09:18.871 "base_bdevs_list": [ 00:09:18.871 { 00:09:18.871 "name": "BaseBdev1", 00:09:18.871 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:18.871 "is_configured": true, 00:09:18.871 "data_offset": 2048, 00:09:18.871 "data_size": 63488 00:09:18.871 }, 00:09:18.871 { 00:09:18.871 "name": "BaseBdev2", 00:09:18.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.871 "is_configured": false, 00:09:18.871 "data_offset": 0, 00:09:18.871 "data_size": 0 00:09:18.871 }, 00:09:18.871 { 00:09:18.871 "name": "BaseBdev3", 00:09:18.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.871 "is_configured": false, 00:09:18.871 "data_offset": 0, 00:09:18.871 "data_size": 0 00:09:18.871 } 00:09:18.871 ] 00:09:18.871 }' 00:09:18.871 22:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.871 22:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.442 [2024-11-26 22:53:58.414143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.442 BaseBdev2 00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.442 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.443 [ 00:09:19.443 { 00:09:19.443 "name": "BaseBdev2", 00:09:19.443 "aliases": [ 00:09:19.443 
"b5f45067-92c7-47c3-b4cf-e2387b2ab820" 00:09:19.443 ], 00:09:19.443 "product_name": "Malloc disk", 00:09:19.443 "block_size": 512, 00:09:19.443 "num_blocks": 65536, 00:09:19.443 "uuid": "b5f45067-92c7-47c3-b4cf-e2387b2ab820", 00:09:19.443 "assigned_rate_limits": { 00:09:19.443 "rw_ios_per_sec": 0, 00:09:19.443 "rw_mbytes_per_sec": 0, 00:09:19.443 "r_mbytes_per_sec": 0, 00:09:19.443 "w_mbytes_per_sec": 0 00:09:19.443 }, 00:09:19.443 "claimed": true, 00:09:19.443 "claim_type": "exclusive_write", 00:09:19.443 "zoned": false, 00:09:19.443 "supported_io_types": { 00:09:19.443 "read": true, 00:09:19.443 "write": true, 00:09:19.443 "unmap": true, 00:09:19.443 "flush": true, 00:09:19.443 "reset": true, 00:09:19.443 "nvme_admin": false, 00:09:19.443 "nvme_io": false, 00:09:19.443 "nvme_io_md": false, 00:09:19.443 "write_zeroes": true, 00:09:19.443 "zcopy": true, 00:09:19.443 "get_zone_info": false, 00:09:19.443 "zone_management": false, 00:09:19.443 "zone_append": false, 00:09:19.443 "compare": false, 00:09:19.443 "compare_and_write": false, 00:09:19.443 "abort": true, 00:09:19.443 "seek_hole": false, 00:09:19.443 "seek_data": false, 00:09:19.443 "copy": true, 00:09:19.443 "nvme_iov_md": false 00:09:19.443 }, 00:09:19.443 "memory_domains": [ 00:09:19.443 { 00:09:19.443 "dma_device_id": "system", 00:09:19.443 "dma_device_type": 1 00:09:19.443 }, 00:09:19.443 { 00:09:19.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.443 "dma_device_type": 2 00:09:19.443 } 00:09:19.443 ], 00:09:19.443 "driver_specific": {} 00:09:19.443 } 00:09:19.443 ] 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.443 "name": "Existed_Raid", 00:09:19.443 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:19.443 
"strip_size_kb": 0, 00:09:19.443 "state": "configuring", 00:09:19.443 "raid_level": "raid1", 00:09:19.443 "superblock": true, 00:09:19.443 "num_base_bdevs": 3, 00:09:19.443 "num_base_bdevs_discovered": 2, 00:09:19.443 "num_base_bdevs_operational": 3, 00:09:19.443 "base_bdevs_list": [ 00:09:19.443 { 00:09:19.443 "name": "BaseBdev1", 00:09:19.443 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:19.443 "is_configured": true, 00:09:19.443 "data_offset": 2048, 00:09:19.443 "data_size": 63488 00:09:19.443 }, 00:09:19.443 { 00:09:19.443 "name": "BaseBdev2", 00:09:19.443 "uuid": "b5f45067-92c7-47c3-b4cf-e2387b2ab820", 00:09:19.443 "is_configured": true, 00:09:19.443 "data_offset": 2048, 00:09:19.443 "data_size": 63488 00:09:19.443 }, 00:09:19.443 { 00:09:19.443 "name": "BaseBdev3", 00:09:19.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.443 "is_configured": false, 00:09:19.443 "data_offset": 0, 00:09:19.443 "data_size": 0 00:09:19.443 } 00:09:19.443 ] 00:09:19.443 }' 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.443 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.704 [2024-11-26 22:53:58.820032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.704 [2024-11-26 22:53:58.820328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:19.704 [2024-11-26 22:53:58.820378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.704 [2024-11-26 22:53:58.820698] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:19.704 BaseBdev3 00:09:19.704 [2024-11-26 22:53:58.820877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:19.704 [2024-11-26 22:53:58.820892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:19.704 [2024-11-26 22:53:58.821006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.704 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.965 [ 00:09:19.965 { 00:09:19.965 "name": "BaseBdev3", 00:09:19.965 "aliases": [ 00:09:19.965 "c9425087-db4e-4aca-b451-c96341498536" 00:09:19.965 ], 00:09:19.965 "product_name": "Malloc disk", 00:09:19.965 "block_size": 512, 00:09:19.965 "num_blocks": 65536, 00:09:19.965 "uuid": "c9425087-db4e-4aca-b451-c96341498536", 00:09:19.965 "assigned_rate_limits": { 00:09:19.965 "rw_ios_per_sec": 0, 00:09:19.965 "rw_mbytes_per_sec": 0, 00:09:19.965 "r_mbytes_per_sec": 0, 00:09:19.965 "w_mbytes_per_sec": 0 00:09:19.965 }, 00:09:19.965 "claimed": true, 00:09:19.965 "claim_type": "exclusive_write", 00:09:19.965 "zoned": false, 00:09:19.965 "supported_io_types": { 00:09:19.965 "read": true, 00:09:19.965 "write": true, 00:09:19.965 "unmap": true, 00:09:19.965 "flush": true, 00:09:19.965 "reset": true, 00:09:19.965 "nvme_admin": false, 00:09:19.965 "nvme_io": false, 00:09:19.965 "nvme_io_md": false, 00:09:19.965 "write_zeroes": true, 00:09:19.965 "zcopy": true, 00:09:19.965 "get_zone_info": false, 00:09:19.965 "zone_management": false, 00:09:19.965 "zone_append": false, 00:09:19.965 "compare": false, 00:09:19.965 "compare_and_write": false, 00:09:19.965 "abort": true, 00:09:19.965 "seek_hole": false, 00:09:19.965 "seek_data": false, 00:09:19.965 "copy": true, 00:09:19.965 "nvme_iov_md": false 00:09:19.965 }, 00:09:19.965 "memory_domains": [ 00:09:19.965 { 00:09:19.965 "dma_device_id": "system", 00:09:19.965 "dma_device_type": 1 00:09:19.965 }, 00:09:19.965 { 00:09:19.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.965 "dma_device_type": 2 00:09:19.965 } 00:09:19.965 ], 00:09:19.965 "driver_specific": {} 00:09:19.965 } 00:09:19.965 ] 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.965 22:53:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.965 "name": "Existed_Raid", 00:09:19.965 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:19.965 "strip_size_kb": 0, 00:09:19.965 "state": "online", 00:09:19.965 "raid_level": "raid1", 00:09:19.965 "superblock": true, 00:09:19.965 "num_base_bdevs": 3, 00:09:19.965 "num_base_bdevs_discovered": 3, 00:09:19.965 "num_base_bdevs_operational": 3, 00:09:19.965 "base_bdevs_list": [ 00:09:19.965 { 00:09:19.965 "name": "BaseBdev1", 00:09:19.965 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:19.965 "is_configured": true, 00:09:19.965 "data_offset": 2048, 00:09:19.965 "data_size": 63488 00:09:19.965 }, 00:09:19.965 { 00:09:19.965 "name": "BaseBdev2", 00:09:19.965 "uuid": "b5f45067-92c7-47c3-b4cf-e2387b2ab820", 00:09:19.965 "is_configured": true, 00:09:19.965 "data_offset": 2048, 00:09:19.965 "data_size": 63488 00:09:19.965 }, 00:09:19.965 { 00:09:19.965 "name": "BaseBdev3", 00:09:19.965 "uuid": "c9425087-db4e-4aca-b451-c96341498536", 00:09:19.965 "is_configured": true, 00:09:19.965 "data_offset": 2048, 00:09:19.965 "data_size": 63488 00:09:19.965 } 00:09:19.965 ] 00:09:19.965 }' 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.965 22:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.226 
22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 [2024-11-26 22:53:59.288484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.226 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.226 "name": "Existed_Raid", 00:09:20.226 "aliases": [ 00:09:20.226 "9ce0e376-0af0-463f-b287-b2649001b353" 00:09:20.226 ], 00:09:20.226 "product_name": "Raid Volume", 00:09:20.226 "block_size": 512, 00:09:20.226 "num_blocks": 63488, 00:09:20.226 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:20.226 "assigned_rate_limits": { 00:09:20.226 "rw_ios_per_sec": 0, 00:09:20.226 "rw_mbytes_per_sec": 0, 00:09:20.226 "r_mbytes_per_sec": 0, 00:09:20.226 "w_mbytes_per_sec": 0 00:09:20.226 }, 00:09:20.226 "claimed": false, 00:09:20.226 "zoned": false, 00:09:20.226 "supported_io_types": { 00:09:20.226 "read": true, 00:09:20.226 "write": true, 00:09:20.226 "unmap": false, 00:09:20.226 "flush": false, 00:09:20.226 "reset": true, 00:09:20.226 "nvme_admin": false, 00:09:20.226 "nvme_io": false, 00:09:20.226 "nvme_io_md": false, 00:09:20.226 "write_zeroes": true, 00:09:20.226 "zcopy": false, 00:09:20.226 "get_zone_info": false, 00:09:20.226 "zone_management": false, 00:09:20.226 "zone_append": false, 00:09:20.226 "compare": false, 00:09:20.226 "compare_and_write": false, 00:09:20.226 
"abort": false, 00:09:20.226 "seek_hole": false, 00:09:20.226 "seek_data": false, 00:09:20.226 "copy": false, 00:09:20.226 "nvme_iov_md": false 00:09:20.226 }, 00:09:20.226 "memory_domains": [ 00:09:20.226 { 00:09:20.226 "dma_device_id": "system", 00:09:20.226 "dma_device_type": 1 00:09:20.226 }, 00:09:20.226 { 00:09:20.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.226 "dma_device_type": 2 00:09:20.226 }, 00:09:20.226 { 00:09:20.226 "dma_device_id": "system", 00:09:20.226 "dma_device_type": 1 00:09:20.226 }, 00:09:20.226 { 00:09:20.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.226 "dma_device_type": 2 00:09:20.226 }, 00:09:20.226 { 00:09:20.226 "dma_device_id": "system", 00:09:20.226 "dma_device_type": 1 00:09:20.226 }, 00:09:20.226 { 00:09:20.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.226 "dma_device_type": 2 00:09:20.226 } 00:09:20.226 ], 00:09:20.227 "driver_specific": { 00:09:20.227 "raid": { 00:09:20.227 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:20.227 "strip_size_kb": 0, 00:09:20.227 "state": "online", 00:09:20.227 "raid_level": "raid1", 00:09:20.227 "superblock": true, 00:09:20.227 "num_base_bdevs": 3, 00:09:20.227 "num_base_bdevs_discovered": 3, 00:09:20.227 "num_base_bdevs_operational": 3, 00:09:20.227 "base_bdevs_list": [ 00:09:20.227 { 00:09:20.227 "name": "BaseBdev1", 00:09:20.227 "uuid": "7a3648f9-2f54-4646-9cfc-5989af165f18", 00:09:20.227 "is_configured": true, 00:09:20.227 "data_offset": 2048, 00:09:20.227 "data_size": 63488 00:09:20.227 }, 00:09:20.227 { 00:09:20.227 "name": "BaseBdev2", 00:09:20.227 "uuid": "b5f45067-92c7-47c3-b4cf-e2387b2ab820", 00:09:20.227 "is_configured": true, 00:09:20.227 "data_offset": 2048, 00:09:20.227 "data_size": 63488 00:09:20.227 }, 00:09:20.227 { 00:09:20.227 "name": "BaseBdev3", 00:09:20.227 "uuid": "c9425087-db4e-4aca-b451-c96341498536", 00:09:20.227 "is_configured": true, 00:09:20.227 "data_offset": 2048, 00:09:20.227 "data_size": 63488 00:09:20.227 } 00:09:20.227 ] 
00:09:20.227 } 00:09:20.227 } 00:09:20.227 }' 00:09:20.227 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.488 BaseBdev2 00:09:20.488 BaseBdev3' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.488 
22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.488 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.489 [2024-11-26 22:53:59.560342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.489 22:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.489 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.749 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.749 "name": "Existed_Raid", 00:09:20.749 "uuid": "9ce0e376-0af0-463f-b287-b2649001b353", 00:09:20.749 "strip_size_kb": 0, 00:09:20.749 "state": "online", 00:09:20.749 "raid_level": "raid1", 00:09:20.749 "superblock": true, 00:09:20.749 "num_base_bdevs": 3, 00:09:20.749 "num_base_bdevs_discovered": 2, 00:09:20.749 "num_base_bdevs_operational": 2, 00:09:20.749 "base_bdevs_list": [ 00:09:20.749 { 00:09:20.749 "name": null, 00:09:20.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.749 "is_configured": false, 00:09:20.749 "data_offset": 0, 00:09:20.749 "data_size": 63488 00:09:20.749 }, 00:09:20.749 { 00:09:20.749 "name": "BaseBdev2", 00:09:20.749 "uuid": "b5f45067-92c7-47c3-b4cf-e2387b2ab820", 00:09:20.749 "is_configured": true, 00:09:20.749 "data_offset": 2048, 00:09:20.749 "data_size": 63488 00:09:20.749 }, 00:09:20.749 { 00:09:20.749 "name": "BaseBdev3", 00:09:20.749 "uuid": "c9425087-db4e-4aca-b451-c96341498536", 00:09:20.749 "is_configured": true, 00:09:20.749 "data_offset": 2048, 00:09:20.749 "data_size": 63488 00:09:20.749 } 00:09:20.749 ] 00:09:20.749 }' 00:09:20.749 22:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.749 22:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.009 22:54:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.009 [2024-11-26 22:54:00.067690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.009 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.009 [2024-11-26 22:54:00.134920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.009 [2024-11-26 22:54:00.135073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.271 [2024-11-26 22:54:00.146568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.271 [2024-11-26 22:54:00.146693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.271 [2024-11-26 22:54:00.146748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 BaseBdev2 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.271 22:54:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 [ 00:09:21.271 { 00:09:21.271 "name": "BaseBdev2", 00:09:21.271 "aliases": [ 00:09:21.271 "55419d74-03d6-41e2-b84a-722b071ab564" 00:09:21.271 ], 00:09:21.271 "product_name": "Malloc disk", 00:09:21.271 "block_size": 512, 00:09:21.271 "num_blocks": 65536, 00:09:21.271 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:21.271 "assigned_rate_limits": { 00:09:21.271 "rw_ios_per_sec": 0, 00:09:21.271 "rw_mbytes_per_sec": 0, 00:09:21.271 "r_mbytes_per_sec": 0, 00:09:21.271 "w_mbytes_per_sec": 0 00:09:21.271 }, 00:09:21.271 "claimed": false, 00:09:21.271 "zoned": false, 00:09:21.271 "supported_io_types": { 00:09:21.271 "read": true, 00:09:21.271 "write": true, 00:09:21.271 "unmap": true, 00:09:21.271 "flush": true, 00:09:21.271 "reset": true, 00:09:21.271 "nvme_admin": false, 00:09:21.271 "nvme_io": false, 00:09:21.271 "nvme_io_md": false, 00:09:21.271 "write_zeroes": true, 00:09:21.271 "zcopy": true, 00:09:21.271 "get_zone_info": false, 00:09:21.271 "zone_management": false, 00:09:21.271 "zone_append": false, 00:09:21.271 "compare": false, 00:09:21.271 
"compare_and_write": false, 00:09:21.271 "abort": true, 00:09:21.271 "seek_hole": false, 00:09:21.271 "seek_data": false, 00:09:21.271 "copy": true, 00:09:21.271 "nvme_iov_md": false 00:09:21.271 }, 00:09:21.271 "memory_domains": [ 00:09:21.271 { 00:09:21.271 "dma_device_id": "system", 00:09:21.271 "dma_device_type": 1 00:09:21.271 }, 00:09:21.271 { 00:09:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.271 "dma_device_type": 2 00:09:21.271 } 00:09:21.271 ], 00:09:21.271 "driver_specific": {} 00:09:21.271 } 00:09:21.271 ] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 BaseBdev3 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.271 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.271 [ 00:09:21.271 { 00:09:21.271 "name": "BaseBdev3", 00:09:21.271 "aliases": [ 00:09:21.271 "eeea23b1-bc6f-447f-9e05-74c8cc64be5c" 00:09:21.271 ], 00:09:21.271 "product_name": "Malloc disk", 00:09:21.271 "block_size": 512, 00:09:21.271 "num_blocks": 65536, 00:09:21.271 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:21.271 "assigned_rate_limits": { 00:09:21.271 "rw_ios_per_sec": 0, 00:09:21.271 "rw_mbytes_per_sec": 0, 00:09:21.271 "r_mbytes_per_sec": 0, 00:09:21.271 "w_mbytes_per_sec": 0 00:09:21.271 }, 00:09:21.271 "claimed": false, 00:09:21.271 "zoned": false, 00:09:21.271 "supported_io_types": { 00:09:21.271 "read": true, 00:09:21.271 "write": true, 00:09:21.271 "unmap": true, 00:09:21.271 "flush": true, 00:09:21.271 "reset": true, 00:09:21.271 "nvme_admin": false, 00:09:21.271 "nvme_io": false, 00:09:21.271 "nvme_io_md": false, 00:09:21.271 "write_zeroes": true, 00:09:21.271 "zcopy": true, 00:09:21.271 "get_zone_info": false, 00:09:21.271 "zone_management": false, 00:09:21.271 
"zone_append": false, 00:09:21.271 "compare": false, 00:09:21.271 "compare_and_write": false, 00:09:21.271 "abort": true, 00:09:21.271 "seek_hole": false, 00:09:21.271 "seek_data": false, 00:09:21.271 "copy": true, 00:09:21.271 "nvme_iov_md": false 00:09:21.271 }, 00:09:21.271 "memory_domains": [ 00:09:21.271 { 00:09:21.271 "dma_device_id": "system", 00:09:21.271 "dma_device_type": 1 00:09:21.271 }, 00:09:21.271 { 00:09:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.271 "dma_device_type": 2 00:09:21.271 } 00:09:21.271 ], 00:09:21.271 "driver_specific": {} 00:09:21.271 } 00:09:21.271 ] 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.272 [2024-11-26 22:54:00.293355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.272 [2024-11-26 22:54:00.293483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.272 [2024-11-26 22:54:00.293535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.272 [2024-11-26 22:54:00.295321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.272 22:54:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.272 "name": 
"Existed_Raid", 00:09:21.272 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:21.272 "strip_size_kb": 0, 00:09:21.272 "state": "configuring", 00:09:21.272 "raid_level": "raid1", 00:09:21.272 "superblock": true, 00:09:21.272 "num_base_bdevs": 3, 00:09:21.272 "num_base_bdevs_discovered": 2, 00:09:21.272 "num_base_bdevs_operational": 3, 00:09:21.272 "base_bdevs_list": [ 00:09:21.272 { 00:09:21.272 "name": "BaseBdev1", 00:09:21.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.272 "is_configured": false, 00:09:21.272 "data_offset": 0, 00:09:21.272 "data_size": 0 00:09:21.272 }, 00:09:21.272 { 00:09:21.272 "name": "BaseBdev2", 00:09:21.272 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:21.272 "is_configured": true, 00:09:21.272 "data_offset": 2048, 00:09:21.272 "data_size": 63488 00:09:21.272 }, 00:09:21.272 { 00:09:21.272 "name": "BaseBdev3", 00:09:21.272 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:21.272 "is_configured": true, 00:09:21.272 "data_offset": 2048, 00:09:21.272 "data_size": 63488 00:09:21.272 } 00:09:21.272 ] 00:09:21.272 }' 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.272 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.843 [2024-11-26 22:54:00.705487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.843 "name": "Existed_Raid", 00:09:21.843 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:21.843 "strip_size_kb": 0, 00:09:21.843 "state": "configuring", 00:09:21.843 "raid_level": "raid1", 00:09:21.843 "superblock": true, 00:09:21.843 
"num_base_bdevs": 3, 00:09:21.843 "num_base_bdevs_discovered": 1, 00:09:21.843 "num_base_bdevs_operational": 3, 00:09:21.843 "base_bdevs_list": [ 00:09:21.843 { 00:09:21.843 "name": "BaseBdev1", 00:09:21.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.843 "is_configured": false, 00:09:21.843 "data_offset": 0, 00:09:21.843 "data_size": 0 00:09:21.843 }, 00:09:21.843 { 00:09:21.843 "name": null, 00:09:21.843 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:21.843 "is_configured": false, 00:09:21.843 "data_offset": 0, 00:09:21.843 "data_size": 63488 00:09:21.843 }, 00:09:21.843 { 00:09:21.843 "name": "BaseBdev3", 00:09:21.843 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:21.843 "is_configured": true, 00:09:21.843 "data_offset": 2048, 00:09:21.843 "data_size": 63488 00:09:21.843 } 00:09:21.843 ] 00:09:21.843 }' 00:09:21.843 22:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.844 22:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 [2024-11-26 22:54:01.164349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.105 BaseBdev1 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 [ 00:09:22.105 { 00:09:22.105 "name": "BaseBdev1", 00:09:22.105 "aliases": [ 00:09:22.105 
"a543a328-7937-48b3-9a33-ef79b0057d4e" 00:09:22.105 ], 00:09:22.105 "product_name": "Malloc disk", 00:09:22.105 "block_size": 512, 00:09:22.105 "num_blocks": 65536, 00:09:22.105 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:22.105 "assigned_rate_limits": { 00:09:22.105 "rw_ios_per_sec": 0, 00:09:22.105 "rw_mbytes_per_sec": 0, 00:09:22.105 "r_mbytes_per_sec": 0, 00:09:22.105 "w_mbytes_per_sec": 0 00:09:22.105 }, 00:09:22.105 "claimed": true, 00:09:22.105 "claim_type": "exclusive_write", 00:09:22.105 "zoned": false, 00:09:22.105 "supported_io_types": { 00:09:22.105 "read": true, 00:09:22.105 "write": true, 00:09:22.105 "unmap": true, 00:09:22.105 "flush": true, 00:09:22.105 "reset": true, 00:09:22.105 "nvme_admin": false, 00:09:22.105 "nvme_io": false, 00:09:22.105 "nvme_io_md": false, 00:09:22.105 "write_zeroes": true, 00:09:22.105 "zcopy": true, 00:09:22.105 "get_zone_info": false, 00:09:22.105 "zone_management": false, 00:09:22.105 "zone_append": false, 00:09:22.105 "compare": false, 00:09:22.105 "compare_and_write": false, 00:09:22.105 "abort": true, 00:09:22.105 "seek_hole": false, 00:09:22.105 "seek_data": false, 00:09:22.105 "copy": true, 00:09:22.105 "nvme_iov_md": false 00:09:22.105 }, 00:09:22.105 "memory_domains": [ 00:09:22.105 { 00:09:22.105 "dma_device_id": "system", 00:09:22.105 "dma_device_type": 1 00:09:22.105 }, 00:09:22.105 { 00:09:22.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.105 "dma_device_type": 2 00:09:22.105 } 00:09:22.105 ], 00:09:22.105 "driver_specific": {} 00:09:22.105 } 00:09:22.105 ] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.365 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.365 "name": "Existed_Raid", 00:09:22.365 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:22.365 "strip_size_kb": 0, 00:09:22.365 "state": "configuring", 00:09:22.365 "raid_level": "raid1", 00:09:22.365 "superblock": true, 00:09:22.365 "num_base_bdevs": 3, 00:09:22.365 "num_base_bdevs_discovered": 2, 00:09:22.365 
"num_base_bdevs_operational": 3, 00:09:22.365 "base_bdevs_list": [ 00:09:22.365 { 00:09:22.365 "name": "BaseBdev1", 00:09:22.365 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:22.365 "is_configured": true, 00:09:22.365 "data_offset": 2048, 00:09:22.365 "data_size": 63488 00:09:22.365 }, 00:09:22.365 { 00:09:22.365 "name": null, 00:09:22.365 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:22.365 "is_configured": false, 00:09:22.365 "data_offset": 0, 00:09:22.365 "data_size": 63488 00:09:22.365 }, 00:09:22.365 { 00:09:22.365 "name": "BaseBdev3", 00:09:22.365 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:22.365 "is_configured": true, 00:09:22.365 "data_offset": 2048, 00:09:22.365 "data_size": 63488 00:09:22.365 } 00:09:22.365 ] 00:09:22.365 }' 00:09:22.365 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.365 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.624 [2024-11-26 22:54:01.676543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.624 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.624 "name": "Existed_Raid", 00:09:22.624 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:22.624 "strip_size_kb": 0, 00:09:22.624 "state": "configuring", 00:09:22.624 "raid_level": "raid1", 00:09:22.624 "superblock": true, 00:09:22.624 "num_base_bdevs": 3, 00:09:22.624 "num_base_bdevs_discovered": 1, 00:09:22.624 "num_base_bdevs_operational": 3, 00:09:22.624 "base_bdevs_list": [ 00:09:22.624 { 00:09:22.624 "name": "BaseBdev1", 00:09:22.624 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:22.624 "is_configured": true, 00:09:22.624 "data_offset": 2048, 00:09:22.624 "data_size": 63488 00:09:22.624 }, 00:09:22.624 { 00:09:22.624 "name": null, 00:09:22.624 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:22.624 "is_configured": false, 00:09:22.624 "data_offset": 0, 00:09:22.625 "data_size": 63488 00:09:22.625 }, 00:09:22.625 { 00:09:22.625 "name": null, 00:09:22.625 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:22.625 "is_configured": false, 00:09:22.625 "data_offset": 0, 00:09:22.625 "data_size": 63488 00:09:22.625 } 00:09:22.625 ] 00:09:22.625 }' 00:09:22.625 22:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.625 22:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.200 [2024-11-26 22:54:02.172719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.200 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.201 22:54:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.201 "name": "Existed_Raid", 00:09:23.201 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:23.201 "strip_size_kb": 0, 00:09:23.201 "state": "configuring", 00:09:23.201 "raid_level": "raid1", 00:09:23.201 "superblock": true, 00:09:23.201 "num_base_bdevs": 3, 00:09:23.201 "num_base_bdevs_discovered": 2, 00:09:23.201 "num_base_bdevs_operational": 3, 00:09:23.201 "base_bdevs_list": [ 00:09:23.201 { 00:09:23.201 "name": "BaseBdev1", 00:09:23.201 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:23.201 "is_configured": true, 00:09:23.201 "data_offset": 2048, 00:09:23.201 "data_size": 63488 00:09:23.201 }, 00:09:23.201 { 00:09:23.201 "name": null, 00:09:23.201 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:23.201 "is_configured": false, 00:09:23.201 "data_offset": 0, 00:09:23.201 "data_size": 63488 00:09:23.201 }, 00:09:23.201 { 00:09:23.201 "name": "BaseBdev3", 00:09:23.201 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:23.201 "is_configured": true, 00:09:23.201 "data_offset": 2048, 00:09:23.201 "data_size": 63488 00:09:23.201 } 00:09:23.201 ] 00:09:23.201 }' 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.201 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.466 
22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.466 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.466 [2024-11-26 22:54:02.588854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.726 
22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.726 "name": "Existed_Raid", 00:09:23.726 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:23.726 "strip_size_kb": 0, 00:09:23.726 "state": "configuring", 00:09:23.726 "raid_level": "raid1", 00:09:23.726 "superblock": true, 00:09:23.726 "num_base_bdevs": 3, 00:09:23.726 "num_base_bdevs_discovered": 1, 00:09:23.726 "num_base_bdevs_operational": 3, 00:09:23.726 "base_bdevs_list": [ 00:09:23.726 { 00:09:23.726 "name": null, 00:09:23.726 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:23.726 "is_configured": false, 00:09:23.726 "data_offset": 0, 00:09:23.726 "data_size": 63488 00:09:23.726 }, 00:09:23.726 { 00:09:23.726 "name": null, 00:09:23.726 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:23.726 "is_configured": false, 00:09:23.726 "data_offset": 0, 00:09:23.726 "data_size": 63488 00:09:23.726 }, 00:09:23.726 { 00:09:23.726 "name": 
"BaseBdev3", 00:09:23.726 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:23.726 "is_configured": true, 00:09:23.726 "data_offset": 2048, 00:09:23.726 "data_size": 63488 00:09:23.726 } 00:09:23.726 ] 00:09:23.726 }' 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.726 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.986 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.986 22:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.986 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.986 22:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.986 [2024-11-26 22:54:03.047426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.986 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.987 "name": "Existed_Raid", 00:09:23.987 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:23.987 "strip_size_kb": 0, 00:09:23.987 "state": "configuring", 00:09:23.987 "raid_level": "raid1", 00:09:23.987 "superblock": true, 00:09:23.987 "num_base_bdevs": 3, 00:09:23.987 "num_base_bdevs_discovered": 2, 00:09:23.987 "num_base_bdevs_operational": 3, 00:09:23.987 
"base_bdevs_list": [ 00:09:23.987 { 00:09:23.987 "name": null, 00:09:23.987 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:23.987 "is_configured": false, 00:09:23.987 "data_offset": 0, 00:09:23.987 "data_size": 63488 00:09:23.987 }, 00:09:23.987 { 00:09:23.987 "name": "BaseBdev2", 00:09:23.987 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:23.987 "is_configured": true, 00:09:23.987 "data_offset": 2048, 00:09:23.987 "data_size": 63488 00:09:23.987 }, 00:09:23.987 { 00:09:23.987 "name": "BaseBdev3", 00:09:23.987 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:23.987 "is_configured": true, 00:09:23.987 "data_offset": 2048, 00:09:23.987 "data_size": 63488 00:09:23.987 } 00:09:23.987 ] 00:09:23.987 }' 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.987 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a543a328-7937-48b3-9a33-ef79b0057d4e 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 [2024-11-26 22:54:03.610442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.558 [2024-11-26 22:54:03.610684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.558 [2024-11-26 22:54:03.610739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.558 [2024-11-26 22:54:03.610994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:24.558 NewBaseBdev 00:09:24.558 [2024-11-26 22:54:03.611147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.558 [2024-11-26 22:54:03.611156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:24.558 [2024-11-26 22:54:03.611266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 [ 00:09:24.558 { 00:09:24.558 "name": "NewBaseBdev", 00:09:24.558 "aliases": [ 00:09:24.558 "a543a328-7937-48b3-9a33-ef79b0057d4e" 00:09:24.558 ], 00:09:24.558 "product_name": "Malloc disk", 00:09:24.558 "block_size": 512, 00:09:24.558 "num_blocks": 65536, 00:09:24.558 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:24.558 "assigned_rate_limits": { 00:09:24.558 "rw_ios_per_sec": 0, 00:09:24.558 "rw_mbytes_per_sec": 0, 00:09:24.558 "r_mbytes_per_sec": 0, 00:09:24.558 "w_mbytes_per_sec": 0 00:09:24.558 }, 00:09:24.558 "claimed": true, 00:09:24.558 "claim_type": "exclusive_write", 00:09:24.558 "zoned": false, 00:09:24.558 "supported_io_types": { 00:09:24.558 "read": true, 00:09:24.558 "write": true, 00:09:24.558 "unmap": true, 00:09:24.558 "flush": true, 00:09:24.558 "reset": true, 00:09:24.558 "nvme_admin": 
false, 00:09:24.558 "nvme_io": false, 00:09:24.558 "nvme_io_md": false, 00:09:24.558 "write_zeroes": true, 00:09:24.558 "zcopy": true, 00:09:24.558 "get_zone_info": false, 00:09:24.558 "zone_management": false, 00:09:24.558 "zone_append": false, 00:09:24.558 "compare": false, 00:09:24.558 "compare_and_write": false, 00:09:24.558 "abort": true, 00:09:24.558 "seek_hole": false, 00:09:24.558 "seek_data": false, 00:09:24.558 "copy": true, 00:09:24.558 "nvme_iov_md": false 00:09:24.558 }, 00:09:24.558 "memory_domains": [ 00:09:24.558 { 00:09:24.558 "dma_device_id": "system", 00:09:24.558 "dma_device_type": 1 00:09:24.558 }, 00:09:24.558 { 00:09:24.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.558 "dma_device_type": 2 00:09:24.558 } 00:09:24.558 ], 00:09:24.558 "driver_specific": {} 00:09:24.558 } 00:09:24.558 ] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.818 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.818 "name": "Existed_Raid", 00:09:24.818 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:24.818 "strip_size_kb": 0, 00:09:24.818 "state": "online", 00:09:24.818 "raid_level": "raid1", 00:09:24.818 "superblock": true, 00:09:24.818 "num_base_bdevs": 3, 00:09:24.818 "num_base_bdevs_discovered": 3, 00:09:24.818 "num_base_bdevs_operational": 3, 00:09:24.818 "base_bdevs_list": [ 00:09:24.818 { 00:09:24.818 "name": "NewBaseBdev", 00:09:24.818 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:24.818 "is_configured": true, 00:09:24.818 "data_offset": 2048, 00:09:24.818 "data_size": 63488 00:09:24.818 }, 00:09:24.818 { 00:09:24.818 "name": "BaseBdev2", 00:09:24.818 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:24.818 "is_configured": true, 00:09:24.818 "data_offset": 2048, 00:09:24.818 "data_size": 63488 00:09:24.818 }, 00:09:24.818 { 00:09:24.818 "name": "BaseBdev3", 00:09:24.818 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:24.818 "is_configured": true, 00:09:24.818 "data_offset": 2048, 00:09:24.818 "data_size": 63488 00:09:24.818 } 
00:09:24.818 ] 00:09:24.818 }' 00:09:24.818 22:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.818 22:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.078 [2024-11-26 22:54:04.107000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.078 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.078 "name": "Existed_Raid", 00:09:25.078 "aliases": [ 00:09:25.078 "8a27923b-cf04-42d1-b7f5-2b6c1da28d31" 00:09:25.078 ], 00:09:25.078 "product_name": "Raid Volume", 00:09:25.078 "block_size": 512, 00:09:25.078 "num_blocks": 63488, 00:09:25.078 "uuid": 
"8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:25.078 "assigned_rate_limits": { 00:09:25.078 "rw_ios_per_sec": 0, 00:09:25.078 "rw_mbytes_per_sec": 0, 00:09:25.078 "r_mbytes_per_sec": 0, 00:09:25.078 "w_mbytes_per_sec": 0 00:09:25.078 }, 00:09:25.078 "claimed": false, 00:09:25.078 "zoned": false, 00:09:25.078 "supported_io_types": { 00:09:25.079 "read": true, 00:09:25.079 "write": true, 00:09:25.079 "unmap": false, 00:09:25.079 "flush": false, 00:09:25.079 "reset": true, 00:09:25.079 "nvme_admin": false, 00:09:25.079 "nvme_io": false, 00:09:25.079 "nvme_io_md": false, 00:09:25.079 "write_zeroes": true, 00:09:25.079 "zcopy": false, 00:09:25.079 "get_zone_info": false, 00:09:25.079 "zone_management": false, 00:09:25.079 "zone_append": false, 00:09:25.079 "compare": false, 00:09:25.079 "compare_and_write": false, 00:09:25.079 "abort": false, 00:09:25.079 "seek_hole": false, 00:09:25.079 "seek_data": false, 00:09:25.079 "copy": false, 00:09:25.079 "nvme_iov_md": false 00:09:25.079 }, 00:09:25.079 "memory_domains": [ 00:09:25.079 { 00:09:25.079 "dma_device_id": "system", 00:09:25.079 "dma_device_type": 1 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.079 "dma_device_type": 2 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "dma_device_id": "system", 00:09:25.079 "dma_device_type": 1 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.079 "dma_device_type": 2 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "dma_device_id": "system", 00:09:25.079 "dma_device_type": 1 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.079 "dma_device_type": 2 00:09:25.079 } 00:09:25.079 ], 00:09:25.079 "driver_specific": { 00:09:25.079 "raid": { 00:09:25.079 "uuid": "8a27923b-cf04-42d1-b7f5-2b6c1da28d31", 00:09:25.079 "strip_size_kb": 0, 00:09:25.079 "state": "online", 00:09:25.079 "raid_level": "raid1", 00:09:25.079 "superblock": true, 00:09:25.079 "num_base_bdevs": 
3, 00:09:25.079 "num_base_bdevs_discovered": 3, 00:09:25.079 "num_base_bdevs_operational": 3, 00:09:25.079 "base_bdevs_list": [ 00:09:25.079 { 00:09:25.079 "name": "NewBaseBdev", 00:09:25.079 "uuid": "a543a328-7937-48b3-9a33-ef79b0057d4e", 00:09:25.079 "is_configured": true, 00:09:25.079 "data_offset": 2048, 00:09:25.079 "data_size": 63488 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "name": "BaseBdev2", 00:09:25.079 "uuid": "55419d74-03d6-41e2-b84a-722b071ab564", 00:09:25.079 "is_configured": true, 00:09:25.079 "data_offset": 2048, 00:09:25.079 "data_size": 63488 00:09:25.079 }, 00:09:25.079 { 00:09:25.079 "name": "BaseBdev3", 00:09:25.079 "uuid": "eeea23b1-bc6f-447f-9e05-74c8cc64be5c", 00:09:25.079 "is_configured": true, 00:09:25.079 "data_offset": 2048, 00:09:25.079 "data_size": 63488 00:09:25.079 } 00:09:25.079 ] 00:09:25.079 } 00:09:25.079 } 00:09:25.079 }' 00:09:25.079 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.079 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.079 BaseBdev2 00:09:25.079 BaseBdev3' 00:09:25.079 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.339 22:54:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.339 [2024-11-26 22:54:04.374686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.339 [2024-11-26 22:54:04.374757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.339 [2024-11-26 22:54:04.374850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.339 [2024-11-26 22:54:04.375130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.339 [2024-11-26 22:54:04.375182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80656 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80656 ']' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80656 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 
00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80656 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80656' 00:09:25.339 killing process with pid 80656 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80656 00:09:25.339 [2024-11-26 22:54:04.424738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.339 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80656 00:09:25.339 [2024-11-26 22:54:04.455341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.598 ************************************ 00:09:25.598 END TEST raid_state_function_test_sb 00:09:25.598 ************************************ 00:09:25.598 22:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:25.598 00:09:25.598 real 0m8.564s 00:09:25.598 user 0m14.590s 00:09:25.598 sys 0m1.766s 00:09:25.598 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.598 22:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.858 22:54:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:25.858 22:54:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.858 22:54:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.858 22:54:04 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.858 ************************************ 00:09:25.858 START TEST raid_superblock_test 00:09:25.858 ************************************ 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81256 00:09:25.858 22:54:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81256 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81256 ']' 00:09:25.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.858 22:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.858 [2024-11-26 22:54:04.841005] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:25.858 [2024-11-26 22:54:04.841240] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81256 ] 00:09:25.858 [2024-11-26 22:54:04.975375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:26.118 [2024-11-26 22:54:04.994850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.118 [2024-11-26 22:54:05.020628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.118 [2024-11-26 22:54:05.061910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.118 [2024-11-26 22:54:05.062018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 malloc1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 [2024-11-26 22:54:05.690134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.688 [2024-11-26 22:54:05.690326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.688 [2024-11-26 22:54:05.690392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:26.688 [2024-11-26 22:54:05.690426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.688 [2024-11-26 22:54:05.692472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.688 [2024-11-26 22:54:05.692537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.688 pt1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 malloc2 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 [2024-11-26 22:54:05.718414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.688 [2024-11-26 22:54:05.718525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.688 [2024-11-26 22:54:05.718547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:26.688 [2024-11-26 22:54:05.718556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.688 [2024-11-26 22:54:05.720535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.688 [2024-11-26 22:54:05.720571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.688 pt2 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 malloc3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 [2024-11-26 22:54:05.746682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.688 [2024-11-26 22:54:05.746790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.688 [2024-11-26 22:54:05.746842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:26.688 [2024-11-26 22:54:05.746870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:26.688 [2024-11-26 22:54:05.748785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.688 [2024-11-26 22:54:05.748850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.688 pt3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 [2024-11-26 22:54:05.758722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.688 [2024-11-26 22:54:05.760414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.688 [2024-11-26 22:54:05.760505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.688 [2024-11-26 22:54:05.760651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:26.688 [2024-11-26 22:54:05.760695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.688 [2024-11-26 22:54:05.760942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:26.688 [2024-11-26 22:54:05.761114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:26.688 [2024-11-26 22:54:05.761153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:26.688 [2024-11-26 22:54:05.761312] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.688 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.949 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.949 "name": "raid_bdev1", 00:09:26.949 "uuid": 
"a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:26.949 "strip_size_kb": 0, 00:09:26.949 "state": "online", 00:09:26.949 "raid_level": "raid1", 00:09:26.949 "superblock": true, 00:09:26.949 "num_base_bdevs": 3, 00:09:26.949 "num_base_bdevs_discovered": 3, 00:09:26.949 "num_base_bdevs_operational": 3, 00:09:26.949 "base_bdevs_list": [ 00:09:26.949 { 00:09:26.949 "name": "pt1", 00:09:26.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.949 "is_configured": true, 00:09:26.949 "data_offset": 2048, 00:09:26.949 "data_size": 63488 00:09:26.949 }, 00:09:26.949 { 00:09:26.949 "name": "pt2", 00:09:26.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.949 "is_configured": true, 00:09:26.949 "data_offset": 2048, 00:09:26.949 "data_size": 63488 00:09:26.949 }, 00:09:26.949 { 00:09:26.949 "name": "pt3", 00:09:26.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.949 "is_configured": true, 00:09:26.949 "data_offset": 2048, 00:09:26.949 "data_size": 63488 00:09:26.949 } 00:09:26.949 ] 00:09:26.949 }' 00:09:26.949 22:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.949 22:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.210 [2024-11-26 22:54:06.131168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.210 "name": "raid_bdev1", 00:09:27.210 "aliases": [ 00:09:27.210 "a48fb232-cab2-499f-8a52-fda876a1cd48" 00:09:27.210 ], 00:09:27.210 "product_name": "Raid Volume", 00:09:27.210 "block_size": 512, 00:09:27.210 "num_blocks": 63488, 00:09:27.210 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:27.210 "assigned_rate_limits": { 00:09:27.210 "rw_ios_per_sec": 0, 00:09:27.210 "rw_mbytes_per_sec": 0, 00:09:27.210 "r_mbytes_per_sec": 0, 00:09:27.210 "w_mbytes_per_sec": 0 00:09:27.210 }, 00:09:27.210 "claimed": false, 00:09:27.210 "zoned": false, 00:09:27.210 "supported_io_types": { 00:09:27.210 "read": true, 00:09:27.210 "write": true, 00:09:27.210 "unmap": false, 00:09:27.210 "flush": false, 00:09:27.210 "reset": true, 00:09:27.210 "nvme_admin": false, 00:09:27.210 "nvme_io": false, 00:09:27.210 "nvme_io_md": false, 00:09:27.210 "write_zeroes": true, 00:09:27.210 "zcopy": false, 00:09:27.210 "get_zone_info": false, 00:09:27.210 "zone_management": false, 00:09:27.210 "zone_append": false, 00:09:27.210 "compare": false, 00:09:27.210 "compare_and_write": false, 00:09:27.210 "abort": false, 00:09:27.210 "seek_hole": false, 00:09:27.210 "seek_data": false, 00:09:27.210 "copy": false, 00:09:27.210 "nvme_iov_md": false 00:09:27.210 }, 00:09:27.210 "memory_domains": [ 00:09:27.210 { 00:09:27.210 "dma_device_id": "system", 00:09:27.210 
"dma_device_type": 1 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.210 "dma_device_type": 2 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "dma_device_id": "system", 00:09:27.210 "dma_device_type": 1 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.210 "dma_device_type": 2 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "dma_device_id": "system", 00:09:27.210 "dma_device_type": 1 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.210 "dma_device_type": 2 00:09:27.210 } 00:09:27.210 ], 00:09:27.210 "driver_specific": { 00:09:27.210 "raid": { 00:09:27.210 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:27.210 "strip_size_kb": 0, 00:09:27.210 "state": "online", 00:09:27.210 "raid_level": "raid1", 00:09:27.210 "superblock": true, 00:09:27.210 "num_base_bdevs": 3, 00:09:27.210 "num_base_bdevs_discovered": 3, 00:09:27.210 "num_base_bdevs_operational": 3, 00:09:27.210 "base_bdevs_list": [ 00:09:27.210 { 00:09:27.210 "name": "pt1", 00:09:27.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.210 "is_configured": true, 00:09:27.210 "data_offset": 2048, 00:09:27.210 "data_size": 63488 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "name": "pt2", 00:09:27.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.210 "is_configured": true, 00:09:27.210 "data_offset": 2048, 00:09:27.210 "data_size": 63488 00:09:27.210 }, 00:09:27.210 { 00:09:27.210 "name": "pt3", 00:09:27.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.210 "is_configured": true, 00:09:27.210 "data_offset": 2048, 00:09:27.210 "data_size": 63488 00:09:27.210 } 00:09:27.210 ] 00:09:27.210 } 00:09:27.210 } 00:09:27.210 }' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:27.210 pt2 00:09:27.210 pt3' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.210 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:27.471 [2024-11-26 22:54:06.411148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a48fb232-cab2-499f-8a52-fda876a1cd48 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a48fb232-cab2-499f-8a52-fda876a1cd48 ']' 00:09:27.471 22:54:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 [2024-11-26 22:54:06.458865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.471 [2024-11-26 22:54:06.458942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.471 [2024-11-26 22:54:06.459035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.471 [2024-11-26 22:54:06.459128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.471 [2024-11-26 22:54:06.459175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.732 22:54:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.732 [2024-11-26 22:54:06.614960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:27.732 [2024-11-26 22:54:06.616821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:27.732 [2024-11-26 22:54:06.616916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:27.732 [2024-11-26 22:54:06.616987] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:27.732 [2024-11-26 22:54:06.617104] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:27.732 [2024-11-26 22:54:06.617165] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:27.732 [2024-11-26 22:54:06.617211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.732 [2024-11-26 22:54:06.617280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:27.732 request: 00:09:27.732 { 00:09:27.732 "name": "raid_bdev1", 00:09:27.732 "raid_level": "raid1", 00:09:27.732 "base_bdevs": [ 00:09:27.732 "malloc1", 00:09:27.732 "malloc2", 00:09:27.732 "malloc3" 00:09:27.732 ], 00:09:27.732 "superblock": false, 00:09:27.732 "method": "bdev_raid_create", 00:09:27.732 "req_id": 1 00:09:27.732 } 00:09:27.732 Got JSON-RPC error response 00:09:27.732 response: 00:09:27.732 { 00:09:27.732 "code": -17, 00:09:27.732 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:27.732 } 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.732 [2024-11-26 22:54:06.666917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:27.732 [2024-11-26 22:54:06.667010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.732 [2024-11-26 22:54:06.667046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:27.732 [2024-11-26 22:54:06.667079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.732 [2024-11-26 22:54:06.669124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.732 [2024-11-26 22:54:06.669190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:27.732 [2024-11-26 22:54:06.669289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:27.732 [2024-11-26 22:54:06.669342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:27.732 pt1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.732 "name": "raid_bdev1", 00:09:27.732 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:27.732 "strip_size_kb": 0, 00:09:27.732 "state": "configuring", 00:09:27.732 "raid_level": "raid1", 00:09:27.732 "superblock": true, 00:09:27.732 "num_base_bdevs": 3, 00:09:27.732 "num_base_bdevs_discovered": 1, 00:09:27.732 "num_base_bdevs_operational": 3, 00:09:27.732 "base_bdevs_list": [ 00:09:27.732 { 00:09:27.732 "name": 
"pt1", 00:09:27.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.732 "is_configured": true, 00:09:27.732 "data_offset": 2048, 00:09:27.732 "data_size": 63488 00:09:27.732 }, 00:09:27.732 { 00:09:27.732 "name": null, 00:09:27.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.732 "is_configured": false, 00:09:27.732 "data_offset": 2048, 00:09:27.732 "data_size": 63488 00:09:27.732 }, 00:09:27.732 { 00:09:27.732 "name": null, 00:09:27.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.732 "is_configured": false, 00:09:27.732 "data_offset": 2048, 00:09:27.732 "data_size": 63488 00:09:27.732 } 00:09:27.732 ] 00:09:27.732 }' 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.732 22:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.992 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.993 [2024-11-26 22:54:07.107073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.993 [2024-11-26 22:54:07.107210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.993 [2024-11-26 22:54:07.107273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:27.993 [2024-11-26 22:54:07.107306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.993 [2024-11-26 22:54:07.107758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.993 [2024-11-26 22:54:07.107822] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.993 [2024-11-26 22:54:07.107931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:27.993 [2024-11-26 22:54:07.107980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.993 pt2 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.993 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.993 [2024-11-26 22:54:07.115095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.252 22:54:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.252 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.253 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.253 "name": "raid_bdev1", 00:09:28.253 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:28.253 "strip_size_kb": 0, 00:09:28.253 "state": "configuring", 00:09:28.253 "raid_level": "raid1", 00:09:28.253 "superblock": true, 00:09:28.253 "num_base_bdevs": 3, 00:09:28.253 "num_base_bdevs_discovered": 1, 00:09:28.253 "num_base_bdevs_operational": 3, 00:09:28.253 "base_bdevs_list": [ 00:09:28.253 { 00:09:28.253 "name": "pt1", 00:09:28.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.253 "is_configured": true, 00:09:28.253 "data_offset": 2048, 00:09:28.253 "data_size": 63488 00:09:28.253 }, 00:09:28.253 { 00:09:28.253 "name": null, 00:09:28.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.253 "is_configured": false, 00:09:28.253 "data_offset": 0, 00:09:28.253 "data_size": 63488 00:09:28.253 }, 00:09:28.253 { 00:09:28.253 "name": null, 00:09:28.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.253 "is_configured": false, 00:09:28.253 "data_offset": 2048, 00:09:28.253 "data_size": 63488 00:09:28.253 } 00:09:28.253 ] 00:09:28.253 }' 00:09:28.253 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.253 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.513 [2024-11-26 22:54:07.555180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.513 [2024-11-26 22:54:07.555332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.513 [2024-11-26 22:54:07.555371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:28.513 [2024-11-26 22:54:07.555425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.513 [2024-11-26 22:54:07.555832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.513 [2024-11-26 22:54:07.555898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.513 [2024-11-26 22:54:07.555998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:28.513 [2024-11-26 22:54:07.556049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.513 pt2 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.513 [2024-11-26 22:54:07.567157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.513 [2024-11-26 22:54:07.567261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.513 [2024-11-26 22:54:07.567291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:28.513 [2024-11-26 22:54:07.567319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.513 [2024-11-26 22:54:07.567647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.513 [2024-11-26 22:54:07.567708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.513 [2024-11-26 22:54:07.567787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:28.513 [2024-11-26 22:54:07.567835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.513 [2024-11-26 22:54:07.567952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:28.513 [2024-11-26 22:54:07.567996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.513 [2024-11-26 22:54:07.568233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:28.513 [2024-11-26 22:54:07.568395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:28.513 [2024-11-26 22:54:07.568430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:28.513 [2024-11-26 22:54:07.568571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:28.513 pt3 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.513 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.513 22:54:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.513 "name": "raid_bdev1", 00:09:28.513 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:28.513 "strip_size_kb": 0, 00:09:28.513 "state": "online", 00:09:28.513 "raid_level": "raid1", 00:09:28.513 "superblock": true, 00:09:28.513 "num_base_bdevs": 3, 00:09:28.513 "num_base_bdevs_discovered": 3, 00:09:28.513 "num_base_bdevs_operational": 3, 00:09:28.513 "base_bdevs_list": [ 00:09:28.513 { 00:09:28.513 "name": "pt1", 00:09:28.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.513 "is_configured": true, 00:09:28.513 "data_offset": 2048, 00:09:28.514 "data_size": 63488 00:09:28.514 }, 00:09:28.514 { 00:09:28.514 "name": "pt2", 00:09:28.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.514 "is_configured": true, 00:09:28.514 "data_offset": 2048, 00:09:28.514 "data_size": 63488 00:09:28.514 }, 00:09:28.514 { 00:09:28.514 "name": "pt3", 00:09:28.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.514 "is_configured": true, 00:09:28.514 "data_offset": 2048, 00:09:28.514 "data_size": 63488 00:09:28.514 } 00:09:28.514 ] 00:09:28.514 }' 00:09:28.514 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.514 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.084 22:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.084 [2024-11-26 22:54:07.995574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.084 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.084 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.084 "name": "raid_bdev1", 00:09:29.084 "aliases": [ 00:09:29.084 "a48fb232-cab2-499f-8a52-fda876a1cd48" 00:09:29.084 ], 00:09:29.084 "product_name": "Raid Volume", 00:09:29.084 "block_size": 512, 00:09:29.084 "num_blocks": 63488, 00:09:29.084 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:29.084 "assigned_rate_limits": { 00:09:29.084 "rw_ios_per_sec": 0, 00:09:29.084 "rw_mbytes_per_sec": 0, 00:09:29.084 "r_mbytes_per_sec": 0, 00:09:29.084 "w_mbytes_per_sec": 0 00:09:29.084 }, 00:09:29.084 "claimed": false, 00:09:29.084 "zoned": false, 00:09:29.084 "supported_io_types": { 00:09:29.084 "read": true, 00:09:29.084 "write": true, 00:09:29.084 "unmap": false, 00:09:29.084 "flush": false, 00:09:29.084 "reset": true, 00:09:29.084 "nvme_admin": false, 00:09:29.084 "nvme_io": false, 00:09:29.084 "nvme_io_md": false, 00:09:29.084 "write_zeroes": true, 00:09:29.084 "zcopy": false, 00:09:29.084 "get_zone_info": false, 00:09:29.084 "zone_management": false, 00:09:29.084 "zone_append": false, 00:09:29.084 "compare": false, 00:09:29.084 "compare_and_write": false, 00:09:29.084 "abort": false, 00:09:29.084 "seek_hole": false, 00:09:29.084 "seek_data": false, 00:09:29.084 "copy": false, 00:09:29.084 
"nvme_iov_md": false 00:09:29.084 }, 00:09:29.084 "memory_domains": [ 00:09:29.084 { 00:09:29.084 "dma_device_id": "system", 00:09:29.084 "dma_device_type": 1 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.084 "dma_device_type": 2 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "dma_device_id": "system", 00:09:29.084 "dma_device_type": 1 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.084 "dma_device_type": 2 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "dma_device_id": "system", 00:09:29.084 "dma_device_type": 1 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.084 "dma_device_type": 2 00:09:29.084 } 00:09:29.084 ], 00:09:29.084 "driver_specific": { 00:09:29.084 "raid": { 00:09:29.084 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:29.084 "strip_size_kb": 0, 00:09:29.084 "state": "online", 00:09:29.084 "raid_level": "raid1", 00:09:29.084 "superblock": true, 00:09:29.084 "num_base_bdevs": 3, 00:09:29.084 "num_base_bdevs_discovered": 3, 00:09:29.084 "num_base_bdevs_operational": 3, 00:09:29.084 "base_bdevs_list": [ 00:09:29.084 { 00:09:29.084 "name": "pt1", 00:09:29.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.084 "is_configured": true, 00:09:29.084 "data_offset": 2048, 00:09:29.084 "data_size": 63488 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "name": "pt2", 00:09:29.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.084 "is_configured": true, 00:09:29.084 "data_offset": 2048, 00:09:29.084 "data_size": 63488 00:09:29.084 }, 00:09:29.084 { 00:09:29.084 "name": "pt3", 00:09:29.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.084 "is_configured": true, 00:09:29.084 "data_offset": 2048, 00:09:29.084 "data_size": 63488 00:09:29.084 } 00:09:29.084 ] 00:09:29.084 } 00:09:29.085 } 00:09:29.085 }' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.085 pt2 00:09:29.085 pt3' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.085 22:54:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.085 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.345 [2024-11-26 22:54:08.259644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a48fb232-cab2-499f-8a52-fda876a1cd48 '!=' 
a48fb232-cab2-499f-8a52-fda876a1cd48 ']' 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.345 [2024-11-26 22:54:08.287391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.345 22:54:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.345 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.345 "name": "raid_bdev1", 00:09:29.346 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:29.346 "strip_size_kb": 0, 00:09:29.346 "state": "online", 00:09:29.346 "raid_level": "raid1", 00:09:29.346 "superblock": true, 00:09:29.346 "num_base_bdevs": 3, 00:09:29.346 "num_base_bdevs_discovered": 2, 00:09:29.346 "num_base_bdevs_operational": 2, 00:09:29.346 "base_bdevs_list": [ 00:09:29.346 { 00:09:29.346 "name": null, 00:09:29.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.346 "is_configured": false, 00:09:29.346 "data_offset": 0, 00:09:29.346 "data_size": 63488 00:09:29.346 }, 00:09:29.346 { 00:09:29.346 "name": "pt2", 00:09:29.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.346 "is_configured": true, 00:09:29.346 "data_offset": 2048, 00:09:29.346 "data_size": 63488 00:09:29.346 }, 00:09:29.346 { 00:09:29.346 "name": "pt3", 00:09:29.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.346 "is_configured": true, 00:09:29.346 "data_offset": 2048, 00:09:29.346 "data_size": 63488 00:09:29.346 } 00:09:29.346 ] 00:09:29.346 }' 00:09:29.346 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.346 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.606 [2024-11-26 22:54:08.719496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.606 [2024-11-26 22:54:08.719526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.606 [2024-11-26 22:54:08.719596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.606 [2024-11-26 22:54:08.719654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.606 [2024-11-26 22:54:08.719666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.606 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 [2024-11-26 22:54:08.803522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.866 [2024-11-26 22:54:08.803577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.866 
[2024-11-26 22:54:08.803622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:29.866 [2024-11-26 22:54:08.803632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.866 [2024-11-26 22:54:08.805699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.866 [2024-11-26 22:54:08.805738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.866 [2024-11-26 22:54:08.805802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:29.866 [2024-11-26 22:54:08.805846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.866 pt2 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.866 22:54:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.866 "name": "raid_bdev1", 00:09:29.866 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:29.866 "strip_size_kb": 0, 00:09:29.866 "state": "configuring", 00:09:29.866 "raid_level": "raid1", 00:09:29.866 "superblock": true, 00:09:29.866 "num_base_bdevs": 3, 00:09:29.866 "num_base_bdevs_discovered": 1, 00:09:29.866 "num_base_bdevs_operational": 2, 00:09:29.866 "base_bdevs_list": [ 00:09:29.866 { 00:09:29.866 "name": null, 00:09:29.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.866 "is_configured": false, 00:09:29.866 "data_offset": 2048, 00:09:29.866 "data_size": 63488 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "name": "pt2", 00:09:29.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.866 "is_configured": true, 00:09:29.866 "data_offset": 2048, 00:09:29.866 "data_size": 63488 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "name": null, 00:09:29.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.866 "is_configured": false, 00:09:29.866 "data_offset": 2048, 00:09:29.866 "data_size": 63488 00:09:29.866 } 00:09:29.866 ] 00:09:29.866 }' 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.866 22:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.126 [2024-11-26 22:54:09.179655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.126 [2024-11-26 22:54:09.179722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.126 [2024-11-26 22:54:09.179744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:30.126 [2024-11-26 22:54:09.179757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.126 [2024-11-26 22:54:09.180144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.126 [2024-11-26 22:54:09.180173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.126 [2024-11-26 22:54:09.180259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:30.126 [2024-11-26 22:54:09.180290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.126 [2024-11-26 22:54:09.180384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.126 [2024-11-26 22:54:09.180400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.126 [2024-11-26 22:54:09.180626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:30.126 [2024-11-26 22:54:09.180748] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.126 [2024-11-26 22:54:09.180763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:30.126 [2024-11-26 22:54:09.180867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.126 pt3 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.126 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.127 "name": "raid_bdev1", 00:09:30.127 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:30.127 "strip_size_kb": 0, 00:09:30.127 "state": "online", 00:09:30.127 "raid_level": "raid1", 00:09:30.127 "superblock": true, 00:09:30.127 "num_base_bdevs": 3, 00:09:30.127 "num_base_bdevs_discovered": 2, 00:09:30.127 "num_base_bdevs_operational": 2, 00:09:30.127 "base_bdevs_list": [ 00:09:30.127 { 00:09:30.127 "name": null, 00:09:30.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.127 "is_configured": false, 00:09:30.127 "data_offset": 2048, 00:09:30.127 "data_size": 63488 00:09:30.127 }, 00:09:30.127 { 00:09:30.127 "name": "pt2", 00:09:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.127 "is_configured": true, 00:09:30.127 "data_offset": 2048, 00:09:30.127 "data_size": 63488 00:09:30.127 }, 00:09:30.127 { 00:09:30.127 "name": "pt3", 00:09:30.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.127 "is_configured": true, 00:09:30.127 "data_offset": 2048, 00:09:30.127 "data_size": 63488 00:09:30.127 } 00:09:30.127 ] 00:09:30.127 }' 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.127 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.696 [2024-11-26 22:54:09.647757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.696 [2024-11-26 
22:54:09.647792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.696 [2024-11-26 22:54:09.647856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.696 [2024-11-26 22:54:09.647922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.696 [2024-11-26 22:54:09.647931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.696 22:54:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.696 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.696 [2024-11-26 22:54:09.707776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.696 [2024-11-26 22:54:09.707830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.696 [2024-11-26 22:54:09.707850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:30.696 [2024-11-26 22:54:09.707859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.696 [2024-11-26 22:54:09.709937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.696 [2024-11-26 22:54:09.709971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.696 [2024-11-26 22:54:09.710040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:30.696 [2024-11-26 22:54:09.710071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.696 [2024-11-26 22:54:09.710180] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:30.696 [2024-11-26 22:54:09.710207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.696 [2024-11-26 22:54:09.710246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:09:30.696 [2024-11-26 22:54:09.710293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.696 pt1 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.697 "name": "raid_bdev1", 00:09:30.697 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:30.697 "strip_size_kb": 
0, 00:09:30.697 "state": "configuring", 00:09:30.697 "raid_level": "raid1", 00:09:30.697 "superblock": true, 00:09:30.697 "num_base_bdevs": 3, 00:09:30.697 "num_base_bdevs_discovered": 1, 00:09:30.697 "num_base_bdevs_operational": 2, 00:09:30.697 "base_bdevs_list": [ 00:09:30.697 { 00:09:30.697 "name": null, 00:09:30.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.697 "is_configured": false, 00:09:30.697 "data_offset": 2048, 00:09:30.697 "data_size": 63488 00:09:30.697 }, 00:09:30.697 { 00:09:30.697 "name": "pt2", 00:09:30.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.697 "is_configured": true, 00:09:30.697 "data_offset": 2048, 00:09:30.697 "data_size": 63488 00:09:30.697 }, 00:09:30.697 { 00:09:30.697 "name": null, 00:09:30.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.697 "is_configured": false, 00:09:30.697 "data_offset": 2048, 00:09:30.697 "data_size": 63488 00:09:30.697 } 00:09:30.697 ] 00:09:30.697 }' 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.697 22:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 [2024-11-26 22:54:10.215985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.266 [2024-11-26 22:54:10.216082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.266 [2024-11-26 22:54:10.216117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:31.266 [2024-11-26 22:54:10.216129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.266 [2024-11-26 22:54:10.216748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.266 [2024-11-26 22:54:10.216788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.266 [2024-11-26 22:54:10.216896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:31.266 [2024-11-26 22:54:10.216936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.266 [2024-11-26 22:54:10.217057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:31.266 [2024-11-26 22:54:10.217076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.266 [2024-11-26 22:54:10.217370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:09:31.266 [2024-11-26 22:54:10.217533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:31.266 [2024-11-26 22:54:10.217553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:31.266 [2024-11-26 22:54:10.217682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.266 pt3 00:09:31.266 22:54:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.266 "name": "raid_bdev1", 00:09:31.266 "uuid": "a48fb232-cab2-499f-8a52-fda876a1cd48", 00:09:31.266 "strip_size_kb": 0, 00:09:31.266 "state": "online", 
00:09:31.266 "raid_level": "raid1", 00:09:31.266 "superblock": true, 00:09:31.266 "num_base_bdevs": 3, 00:09:31.267 "num_base_bdevs_discovered": 2, 00:09:31.267 "num_base_bdevs_operational": 2, 00:09:31.267 "base_bdevs_list": [ 00:09:31.267 { 00:09:31.267 "name": null, 00:09:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.267 "is_configured": false, 00:09:31.267 "data_offset": 2048, 00:09:31.267 "data_size": 63488 00:09:31.267 }, 00:09:31.267 { 00:09:31.267 "name": "pt2", 00:09:31.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.267 "is_configured": true, 00:09:31.267 "data_offset": 2048, 00:09:31.267 "data_size": 63488 00:09:31.267 }, 00:09:31.267 { 00:09:31.267 "name": "pt3", 00:09:31.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.267 "is_configured": true, 00:09:31.267 "data_offset": 2048, 00:09:31.267 "data_size": 63488 00:09:31.267 } 00:09:31.267 ] 00:09:31.267 }' 00:09:31.267 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.267 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.525 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:31.525 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:31.525 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:31.784 [2024-11-26 22:54:10.708387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a48fb232-cab2-499f-8a52-fda876a1cd48 '!=' a48fb232-cab2-499f-8a52-fda876a1cd48 ']' 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81256 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81256 ']' 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81256 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81256 00:09:31.784 killing process with pid 81256 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81256' 00:09:31.784 22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81256 00:09:31.784 [2024-11-26 22:54:10.788812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.784 [2024-11-26 22:54:10.788914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.784 
22:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81256 00:09:31.784 [2024-11-26 22:54:10.788985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.784 [2024-11-26 22:54:10.789003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:31.784 [2024-11-26 22:54:10.851681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.355 22:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:32.355 00:09:32.355 real 0m6.437s 00:09:32.355 user 0m10.683s 00:09:32.355 sys 0m1.305s 00:09:32.355 22:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.355 22:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.355 ************************************ 00:09:32.355 END TEST raid_superblock_test 00:09:32.355 ************************************ 00:09:32.355 22:54:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:32.355 22:54:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.355 22:54:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.355 22:54:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.355 ************************************ 00:09:32.355 START TEST raid_read_error_test 00:09:32.355 ************************************ 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 
00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:32.355 22:54:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qOjwjpssXe 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81686 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81686 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81686 ']' 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.355 22:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.355 [2024-11-26 22:54:11.371093] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:09:32.355 [2024-11-26 22:54:11.371225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81686 ] 00:09:32.615 [2024-11-26 22:54:11.506369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:32.615 [2024-11-26 22:54:11.547265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.615 [2024-11-26 22:54:11.588318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.615 [2024-11-26 22:54:11.665592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.615 [2024-11-26 22:54:11.665641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 BaseBdev1_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 true 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 [2024-11-26 22:54:12.233965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.201 [2024-11-26 22:54:12.234028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.201 [2024-11-26 22:54:12.234046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.201 [2024-11-26 22:54:12.234063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.201 [2024-11-26 22:54:12.236620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.201 [2024-11-26 22:54:12.236660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.201 BaseBdev1 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 BaseBdev2_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 true 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 [2024-11-26 22:54:12.281265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.201 [2024-11-26 22:54:12.281319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.201 [2024-11-26 22:54:12.281338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.201 [2024-11-26 22:54:12.281351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.201 [2024-11-26 22:54:12.283742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.201 [2024-11-26 22:54:12.283780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:33.201 BaseBdev2 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 BaseBdev3_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.201 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 true 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 [2024-11-26 22:54:12.327981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:33.513 [2024-11-26 22:54:12.328039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.513 [2024-11-26 22:54:12.328059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:33.513 [2024-11-26 22:54:12.328073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.513 [2024-11-26 22:54:12.330504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.513 [2024-11-26 22:54:12.330543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:33.513 BaseBdev3 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.513 22:54:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 [2024-11-26 22:54:12.340043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.513 [2024-11-26 22:54:12.342186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.513 [2024-11-26 22:54:12.342292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.513 [2024-11-26 22:54:12.342490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.513 [2024-11-26 22:54:12.342513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.513 [2024-11-26 22:54:12.342804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:33.513 [2024-11-26 22:54:12.342982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.513 [2024-11-26 22:54:12.343007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:33.513 [2024-11-26 22:54:12.343152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.513 22:54:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.513 "name": "raid_bdev1", 00:09:33.513 "uuid": "efc40f62-f3d3-47e7-984b-6ef007880674", 00:09:33.513 "strip_size_kb": 0, 00:09:33.513 "state": "online", 00:09:33.513 "raid_level": "raid1", 00:09:33.513 "superblock": true, 00:09:33.513 "num_base_bdevs": 3, 00:09:33.513 "num_base_bdevs_discovered": 3, 00:09:33.513 "num_base_bdevs_operational": 3, 00:09:33.513 "base_bdevs_list": [ 00:09:33.513 { 00:09:33.513 "name": "BaseBdev1", 00:09:33.513 "uuid": "1b146930-abb5-5a2e-b816-ec5c7313a86b", 00:09:33.513 "is_configured": true, 00:09:33.513 "data_offset": 2048, 00:09:33.513 "data_size": 63488 00:09:33.513 }, 00:09:33.513 
{ 00:09:33.513 "name": "BaseBdev2", 00:09:33.513 "uuid": "31097ad4-5d3e-5712-b62e-3e57f1e80a84", 00:09:33.513 "is_configured": true, 00:09:33.513 "data_offset": 2048, 00:09:33.513 "data_size": 63488 00:09:33.513 }, 00:09:33.513 { 00:09:33.513 "name": "BaseBdev3", 00:09:33.513 "uuid": "4accaf3f-9df0-57db-8ee9-aeb5e753570d", 00:09:33.513 "is_configured": true, 00:09:33.513 "data_offset": 2048, 00:09:33.513 "data_size": 63488 00:09:33.513 } 00:09:33.513 ] 00:09:33.513 }' 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.513 22:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.772 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.772 22:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:33.772 [2024-11-26 22:54:12.840903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:34.716 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.717 "name": "raid_bdev1", 00:09:34.717 "uuid": "efc40f62-f3d3-47e7-984b-6ef007880674", 00:09:34.717 "strip_size_kb": 0, 00:09:34.717 "state": "online", 00:09:34.717 "raid_level": "raid1", 00:09:34.717 "superblock": true, 00:09:34.717 "num_base_bdevs": 3, 00:09:34.717 
"num_base_bdevs_discovered": 3, 00:09:34.717 "num_base_bdevs_operational": 3, 00:09:34.717 "base_bdevs_list": [ 00:09:34.717 { 00:09:34.717 "name": "BaseBdev1", 00:09:34.717 "uuid": "1b146930-abb5-5a2e-b816-ec5c7313a86b", 00:09:34.717 "is_configured": true, 00:09:34.717 "data_offset": 2048, 00:09:34.717 "data_size": 63488 00:09:34.717 }, 00:09:34.717 { 00:09:34.717 "name": "BaseBdev2", 00:09:34.717 "uuid": "31097ad4-5d3e-5712-b62e-3e57f1e80a84", 00:09:34.717 "is_configured": true, 00:09:34.717 "data_offset": 2048, 00:09:34.717 "data_size": 63488 00:09:34.717 }, 00:09:34.717 { 00:09:34.717 "name": "BaseBdev3", 00:09:34.717 "uuid": "4accaf3f-9df0-57db-8ee9-aeb5e753570d", 00:09:34.717 "is_configured": true, 00:09:34.717 "data_offset": 2048, 00:09:34.717 "data_size": 63488 00:09:34.717 } 00:09:34.717 ] 00:09:34.717 }' 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.717 22:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.286 [2024-11-26 22:54:14.228846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.286 [2024-11-26 22:54:14.228892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.286 [2024-11-26 22:54:14.231639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.286 [2024-11-26 22:54:14.231697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.286 [2024-11-26 22:54:14.231840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.286 
[2024-11-26 22:54:14.231858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:35.286 { 00:09:35.286 "results": [ 00:09:35.286 { 00:09:35.286 "job": "raid_bdev1", 00:09:35.286 "core_mask": "0x1", 00:09:35.286 "workload": "randrw", 00:09:35.286 "percentage": 50, 00:09:35.286 "status": "finished", 00:09:35.286 "queue_depth": 1, 00:09:35.286 "io_size": 131072, 00:09:35.286 "runtime": 1.385349, 00:09:35.286 "iops": 10739.53206015235, 00:09:35.286 "mibps": 1342.4415075190439, 00:09:35.286 "io_failed": 0, 00:09:35.286 "io_timeout": 0, 00:09:35.286 "avg_latency_us": 90.36375988140846, 00:09:35.286 "min_latency_us": 23.530231747691236, 00:09:35.286 "max_latency_us": 1434.5635128071092 00:09:35.286 } 00:09:35.286 ], 00:09:35.286 "core_count": 1 00:09:35.286 } 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81686 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81686 ']' 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81686 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81686 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.286 killing process with pid 81686 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81686' 00:09:35.286 
22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81686 00:09:35.286 [2024-11-26 22:54:14.274486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.286 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81686 00:09:35.286 [2024-11-26 22:54:14.324952] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qOjwjpssXe 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:35.547 00:09:35.547 real 0m3.403s 00:09:35.547 user 0m4.139s 00:09:35.547 sys 0m0.628s 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.547 22:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.547 ************************************ 00:09:35.547 END TEST raid_read_error_test 00:09:35.547 ************************************ 00:09:35.806 22:54:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:35.806 22:54:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.806 22:54:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.806 22:54:14 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.806 ************************************ 00:09:35.806 START TEST raid_write_error_test 00:09:35.806 ************************************ 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:35.806 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:35.807 
22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m2QLc14Ob8 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81825 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81825 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81825 ']' 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.807 22:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.807 [2024-11-26 22:54:14.843221] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:35.807 [2024-11-26 22:54:14.843371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81825 ] 00:09:36.067 [2024-11-26 22:54:14.979209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.067 [2024-11-26 22:54:15.018227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.067 [2024-11-26 22:54:15.058785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.067 [2024-11-26 22:54:15.137452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.067 [2024-11-26 22:54:15.137505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.637 BaseBdev1_malloc 00:09:36.637 22:54:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.637 true 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.637 [2024-11-26 22:54:15.716846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:36.637 [2024-11-26 22:54:15.716924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.637 [2024-11-26 22:54:15.716945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:36.637 [2024-11-26 22:54:15.716961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.637 [2024-11-26 22:54:15.719446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.637 [2024-11-26 22:54:15.719487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:36.637 BaseBdev1 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.637 BaseBdev2_malloc 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.637 true 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.637 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.897 [2024-11-26 22:54:15.764085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:36.897 [2024-11-26 22:54:15.764140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.897 [2024-11-26 22:54:15.764160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:36.897 [2024-11-26 22:54:15.764173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.897 [2024-11-26 22:54:15.766578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.897 [2024-11-26 22:54:15.766618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:36.897 BaseBdev2 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.897 BaseBdev3_malloc 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.897 true 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.897 [2024-11-26 22:54:15.810639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:36.897 [2024-11-26 22:54:15.810693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.897 [2024-11-26 22:54:15.810713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:36.897 [2024-11-26 22:54:15.810726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.897 [2024-11-26 22:54:15.813106] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.897 [2024-11-26 22:54:15.813145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:36.897 BaseBdev3 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.897 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.897 [2024-11-26 22:54:15.822704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.897 [2024-11-26 22:54:15.824810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.898 [2024-11-26 22:54:15.824890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.898 [2024-11-26 22:54:15.825104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.898 [2024-11-26 22:54:15.825125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.898 [2024-11-26 22:54:15.825426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:36.898 [2024-11-26 22:54:15.825601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.898 [2024-11-26 22:54:15.825625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:36.898 [2024-11-26 22:54:15.825770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.898 22:54:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.898 "name": "raid_bdev1", 00:09:36.898 "uuid": "1942ee5b-b2bf-4764-a959-88ae18b16420", 00:09:36.898 "strip_size_kb": 0, 00:09:36.898 "state": "online", 00:09:36.898 "raid_level": "raid1", 00:09:36.898 "superblock": true, 00:09:36.898 
"num_base_bdevs": 3, 00:09:36.898 "num_base_bdevs_discovered": 3, 00:09:36.898 "num_base_bdevs_operational": 3, 00:09:36.898 "base_bdevs_list": [ 00:09:36.898 { 00:09:36.898 "name": "BaseBdev1", 00:09:36.898 "uuid": "a6fbe013-b92f-5ed6-8e84-805af626029a", 00:09:36.898 "is_configured": true, 00:09:36.898 "data_offset": 2048, 00:09:36.898 "data_size": 63488 00:09:36.898 }, 00:09:36.898 { 00:09:36.898 "name": "BaseBdev2", 00:09:36.898 "uuid": "02ae60cb-e730-5fb4-b757-8ba0dce55438", 00:09:36.898 "is_configured": true, 00:09:36.898 "data_offset": 2048, 00:09:36.898 "data_size": 63488 00:09:36.898 }, 00:09:36.898 { 00:09:36.898 "name": "BaseBdev3", 00:09:36.898 "uuid": "730f4820-babc-5d2b-81a1-14c4ba5744e8", 00:09:36.898 "is_configured": true, 00:09:36.898 "data_offset": 2048, 00:09:36.898 "data_size": 63488 00:09:36.898 } 00:09:36.898 ] 00:09:36.898 }' 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.898 22:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.157 22:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:37.157 22:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:37.417 [2024-11-26 22:54:16.295287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.358 [2024-11-26 22:54:17.208832] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:38.358 [2024-11-26 22:54:17.208899] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.358 [2024-11-26 22:54:17.209143] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006b10 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.358 22:54:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.358 "name": "raid_bdev1", 00:09:38.358 "uuid": "1942ee5b-b2bf-4764-a959-88ae18b16420", 00:09:38.358 "strip_size_kb": 0, 00:09:38.358 "state": "online", 00:09:38.358 "raid_level": "raid1", 00:09:38.358 "superblock": true, 00:09:38.358 "num_base_bdevs": 3, 00:09:38.358 "num_base_bdevs_discovered": 2, 00:09:38.358 "num_base_bdevs_operational": 2, 00:09:38.358 "base_bdevs_list": [ 00:09:38.358 { 00:09:38.358 "name": null, 00:09:38.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.358 "is_configured": false, 00:09:38.358 "data_offset": 0, 00:09:38.358 "data_size": 63488 00:09:38.358 }, 00:09:38.358 { 00:09:38.358 "name": "BaseBdev2", 00:09:38.358 "uuid": "02ae60cb-e730-5fb4-b757-8ba0dce55438", 00:09:38.358 "is_configured": true, 00:09:38.358 "data_offset": 2048, 00:09:38.358 "data_size": 63488 00:09:38.358 }, 00:09:38.358 { 00:09:38.358 "name": "BaseBdev3", 00:09:38.358 "uuid": "730f4820-babc-5d2b-81a1-14c4ba5744e8", 00:09:38.358 "is_configured": true, 00:09:38.358 "data_offset": 2048, 00:09:38.358 "data_size": 63488 00:09:38.358 } 00:09:38.358 ] 00:09:38.358 }' 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.358 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.618 [2024-11-26 22:54:17.693609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.618 [2024-11-26 22:54:17.693733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.618 [2024-11-26 22:54:17.696470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.618 [2024-11-26 22:54:17.696578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.618 [2024-11-26 22:54:17.696693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.618 [2024-11-26 22:54:17.696764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:38.618 { 00:09:38.618 "results": [ 00:09:38.618 { 00:09:38.618 "job": "raid_bdev1", 00:09:38.618 "core_mask": "0x1", 00:09:38.618 "workload": "randrw", 00:09:38.618 "percentage": 50, 00:09:38.618 "status": "finished", 00:09:38.618 "queue_depth": 1, 00:09:38.618 "io_size": 131072, 00:09:38.618 "runtime": 1.396335, 00:09:38.618 "iops": 12182.606609445442, 00:09:38.618 "mibps": 1522.8258261806802, 00:09:38.618 "io_failed": 0, 00:09:38.618 "io_timeout": 0, 00:09:38.618 "avg_latency_us": 79.32009764980852, 00:09:38.618 "min_latency_us": 23.540486359278304, 00:09:38.618 "max_latency_us": 1428.0484616055087 00:09:38.618 } 00:09:38.618 ], 00:09:38.618 "core_count": 1 00:09:38.618 } 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81825 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81825 ']' 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 81825 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81825 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.618 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81825' 00:09:38.878 killing process with pid 81825 00:09:38.878 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81825 00:09:38.878 [2024-11-26 22:54:17.744829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.878 22:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81825 00:09:38.878 [2024-11-26 22:54:17.794397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m2QLc14Ob8 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:09:39.136 00:09:39.136 real 0m3.402s 00:09:39.136 user 0m4.143s 00:09:39.136 sys 0m0.661s 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.136 ************************************ 00:09:39.136 END TEST raid_write_error_test 00:09:39.136 ************************************ 00:09:39.136 22:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.136 22:54:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:39.136 22:54:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:39.136 22:54:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:39.136 22:54:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:39.136 22:54:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.136 22:54:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.136 ************************************ 00:09:39.136 START TEST raid_state_function_test 00:09:39.136 ************************************ 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.136 22:54:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:39.136 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81952 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81952' 00:09:39.137 Process raid pid: 81952 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81952 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81952 ']' 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.137 22:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.395 [2024-11-26 22:54:18.313592] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:09:39.395 [2024-11-26 22:54:18.313705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.395 [2024-11-26 22:54:18.448495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:39.395 [2024-11-26 22:54:18.468106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.395 [2024-11-26 22:54:18.509098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.654 [2024-11-26 22:54:18.586245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.654 [2024-11-26 22:54:18.586294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.223 [2024-11-26 22:54:19.165727] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.223 [2024-11-26 22:54:19.165876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.223 [2024-11-26 22:54:19.165918] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.223 [2024-11-26 22:54:19.165944] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.223 [2024-11-26 22:54:19.165973] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.223 [2024-11-26 22:54:19.166015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.223 [2024-11-26 22:54:19.166040] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:40.223 [2024-11-26 22:54:19.166065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.223 
22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.223 "name": "Existed_Raid", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.223 "strip_size_kb": 64, 00:09:40.223 "state": "configuring", 00:09:40.223 "raid_level": "raid0", 00:09:40.223 "superblock": false, 00:09:40.223 "num_base_bdevs": 4, 00:09:40.223 "num_base_bdevs_discovered": 0, 00:09:40.223 "num_base_bdevs_operational": 4, 00:09:40.223 "base_bdevs_list": [ 00:09:40.223 { 00:09:40.223 "name": "BaseBdev1", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.223 "is_configured": false, 00:09:40.223 "data_offset": 0, 00:09:40.223 "data_size": 0 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "name": "BaseBdev2", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.223 "is_configured": false, 00:09:40.223 "data_offset": 0, 00:09:40.223 "data_size": 0 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "name": "BaseBdev3", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.223 "is_configured": false, 00:09:40.223 "data_offset": 0, 00:09:40.223 "data_size": 0 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "name": "BaseBdev4", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.223 "is_configured": false, 00:09:40.223 "data_offset": 0, 00:09:40.223 "data_size": 0 00:09:40.223 } 00:09:40.223 ] 00:09:40.223 }' 00:09:40.223 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.223 22:54:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.483 [2024-11-26 22:54:19.593645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.483 [2024-11-26 22:54:19.593738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.483 [2024-11-26 22:54:19.601668] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.483 [2024-11-26 22:54:19.601754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.483 [2024-11-26 22:54:19.601791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.483 [2024-11-26 22:54:19.601816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.483 [2024-11-26 22:54:19.601829] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.483 [2024-11-26 22:54:19.601838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:09:40.483 [2024-11-26 22:54:19.601848] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:40.483 [2024-11-26 22:54:19.601857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.483 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 [2024-11-26 22:54:19.624870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.743 BaseBdev1 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 22:54:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 [ 00:09:40.743 { 00:09:40.743 "name": "BaseBdev1", 00:09:40.743 "aliases": [ 00:09:40.743 "c201c5b0-7f74-4b09-82b4-f15a3a50e084" 00:09:40.743 ], 00:09:40.743 "product_name": "Malloc disk", 00:09:40.743 "block_size": 512, 00:09:40.743 "num_blocks": 65536, 00:09:40.743 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:40.743 "assigned_rate_limits": { 00:09:40.743 "rw_ios_per_sec": 0, 00:09:40.743 "rw_mbytes_per_sec": 0, 00:09:40.743 "r_mbytes_per_sec": 0, 00:09:40.743 "w_mbytes_per_sec": 0 00:09:40.743 }, 00:09:40.743 "claimed": true, 00:09:40.743 "claim_type": "exclusive_write", 00:09:40.743 "zoned": false, 00:09:40.743 "supported_io_types": { 00:09:40.743 "read": true, 00:09:40.743 "write": true, 00:09:40.743 "unmap": true, 00:09:40.743 "flush": true, 00:09:40.743 "reset": true, 00:09:40.743 "nvme_admin": false, 00:09:40.743 "nvme_io": false, 00:09:40.743 "nvme_io_md": false, 00:09:40.743 "write_zeroes": true, 00:09:40.743 "zcopy": true, 00:09:40.743 "get_zone_info": false, 00:09:40.743 "zone_management": false, 00:09:40.743 "zone_append": false, 00:09:40.743 "compare": false, 00:09:40.743 "compare_and_write": false, 00:09:40.743 "abort": true, 00:09:40.743 "seek_hole": false, 00:09:40.743 "seek_data": false, 00:09:40.743 "copy": true, 00:09:40.743 "nvme_iov_md": false 00:09:40.743 }, 00:09:40.743 "memory_domains": [ 00:09:40.743 { 00:09:40.743 "dma_device_id": "system", 00:09:40.743 "dma_device_type": 1 00:09:40.743 }, 00:09:40.743 { 00:09:40.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.743 "dma_device_type": 
2 00:09:40.743 } 00:09:40.743 ], 00:09:40.743 "driver_specific": {} 00:09:40.743 } 00:09:40.743 ] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.743 "name": "Existed_Raid", 00:09:40.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.743 "strip_size_kb": 64, 00:09:40.743 "state": "configuring", 00:09:40.744 "raid_level": "raid0", 00:09:40.744 "superblock": false, 00:09:40.744 "num_base_bdevs": 4, 00:09:40.744 "num_base_bdevs_discovered": 1, 00:09:40.744 "num_base_bdevs_operational": 4, 00:09:40.744 "base_bdevs_list": [ 00:09:40.744 { 00:09:40.744 "name": "BaseBdev1", 00:09:40.744 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:40.744 "is_configured": true, 00:09:40.744 "data_offset": 0, 00:09:40.744 "data_size": 65536 00:09:40.744 }, 00:09:40.744 { 00:09:40.744 "name": "BaseBdev2", 00:09:40.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.744 "is_configured": false, 00:09:40.744 "data_offset": 0, 00:09:40.744 "data_size": 0 00:09:40.744 }, 00:09:40.744 { 00:09:40.744 "name": "BaseBdev3", 00:09:40.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.744 "is_configured": false, 00:09:40.744 "data_offset": 0, 00:09:40.744 "data_size": 0 00:09:40.744 }, 00:09:40.744 { 00:09:40.744 "name": "BaseBdev4", 00:09:40.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.744 "is_configured": false, 00:09:40.744 "data_offset": 0, 00:09:40.744 "data_size": 0 00:09:40.744 } 00:09:40.744 ] 00:09:40.744 }' 00:09:40.744 22:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.744 22:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:09:41.004 [2024-11-26 22:54:20.061031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.004 [2024-11-26 22:54:20.061154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 [2024-11-26 22:54:20.073079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.004 [2024-11-26 22:54:20.075320] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.004 [2024-11-26 22:54:20.075410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.004 [2024-11-26 22:54:20.075446] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.004 [2024-11-26 22:54:20.075473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.004 [2024-11-26 22:54:20.075497] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.004 [2024-11-26 22:54:20.075537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.004 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.264 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.264 "name": "Existed_Raid", 00:09:41.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.264 
"strip_size_kb": 64, 00:09:41.264 "state": "configuring", 00:09:41.264 "raid_level": "raid0", 00:09:41.264 "superblock": false, 00:09:41.264 "num_base_bdevs": 4, 00:09:41.264 "num_base_bdevs_discovered": 1, 00:09:41.264 "num_base_bdevs_operational": 4, 00:09:41.264 "base_bdevs_list": [ 00:09:41.264 { 00:09:41.264 "name": "BaseBdev1", 00:09:41.264 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:41.264 "is_configured": true, 00:09:41.264 "data_offset": 0, 00:09:41.264 "data_size": 65536 00:09:41.264 }, 00:09:41.264 { 00:09:41.264 "name": "BaseBdev2", 00:09:41.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.264 "is_configured": false, 00:09:41.264 "data_offset": 0, 00:09:41.264 "data_size": 0 00:09:41.264 }, 00:09:41.264 { 00:09:41.264 "name": "BaseBdev3", 00:09:41.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.264 "is_configured": false, 00:09:41.264 "data_offset": 0, 00:09:41.264 "data_size": 0 00:09:41.264 }, 00:09:41.264 { 00:09:41.264 "name": "BaseBdev4", 00:09:41.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.264 "is_configured": false, 00:09:41.264 "data_offset": 0, 00:09:41.264 "data_size": 0 00:09:41.264 } 00:09:41.264 ] 00:09:41.264 }' 00:09:41.264 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.264 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.524 [2024-11-26 22:54:20.470021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.524 BaseBdev2 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.524 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.524 [ 00:09:41.524 { 00:09:41.524 "name": "BaseBdev2", 00:09:41.524 "aliases": [ 00:09:41.524 "ebd06a8e-ee02-425c-a92d-c42b9ab885fb" 00:09:41.524 ], 00:09:41.524 "product_name": "Malloc disk", 00:09:41.524 "block_size": 512, 00:09:41.524 "num_blocks": 65536, 00:09:41.524 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:41.524 "assigned_rate_limits": { 00:09:41.524 "rw_ios_per_sec": 0, 00:09:41.524 "rw_mbytes_per_sec": 0, 00:09:41.524 "r_mbytes_per_sec": 0, 00:09:41.524 "w_mbytes_per_sec": 0 00:09:41.524 
}, 00:09:41.524 "claimed": true, 00:09:41.524 "claim_type": "exclusive_write", 00:09:41.524 "zoned": false, 00:09:41.524 "supported_io_types": { 00:09:41.524 "read": true, 00:09:41.524 "write": true, 00:09:41.524 "unmap": true, 00:09:41.524 "flush": true, 00:09:41.524 "reset": true, 00:09:41.524 "nvme_admin": false, 00:09:41.524 "nvme_io": false, 00:09:41.524 "nvme_io_md": false, 00:09:41.524 "write_zeroes": true, 00:09:41.524 "zcopy": true, 00:09:41.524 "get_zone_info": false, 00:09:41.524 "zone_management": false, 00:09:41.524 "zone_append": false, 00:09:41.524 "compare": false, 00:09:41.524 "compare_and_write": false, 00:09:41.525 "abort": true, 00:09:41.525 "seek_hole": false, 00:09:41.525 "seek_data": false, 00:09:41.525 "copy": true, 00:09:41.525 "nvme_iov_md": false 00:09:41.525 }, 00:09:41.525 "memory_domains": [ 00:09:41.525 { 00:09:41.525 "dma_device_id": "system", 00:09:41.525 "dma_device_type": 1 00:09:41.525 }, 00:09:41.525 { 00:09:41.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.525 "dma_device_type": 2 00:09:41.525 } 00:09:41.525 ], 00:09:41.525 "driver_specific": {} 00:09:41.525 } 00:09:41.525 ] 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.525 22:54:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.525 "name": "Existed_Raid", 00:09:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.525 "strip_size_kb": 64, 00:09:41.525 "state": "configuring", 00:09:41.525 "raid_level": "raid0", 00:09:41.525 "superblock": false, 00:09:41.525 "num_base_bdevs": 4, 00:09:41.525 "num_base_bdevs_discovered": 2, 00:09:41.525 "num_base_bdevs_operational": 4, 00:09:41.525 "base_bdevs_list": [ 00:09:41.525 { 00:09:41.525 "name": "BaseBdev1", 00:09:41.525 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:41.525 "is_configured": true, 00:09:41.525 "data_offset": 0, 
00:09:41.525 "data_size": 65536 00:09:41.525 }, 00:09:41.525 { 00:09:41.525 "name": "BaseBdev2", 00:09:41.525 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:41.525 "is_configured": true, 00:09:41.525 "data_offset": 0, 00:09:41.525 "data_size": 65536 00:09:41.525 }, 00:09:41.525 { 00:09:41.525 "name": "BaseBdev3", 00:09:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.525 "is_configured": false, 00:09:41.525 "data_offset": 0, 00:09:41.525 "data_size": 0 00:09:41.525 }, 00:09:41.525 { 00:09:41.525 "name": "BaseBdev4", 00:09:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.525 "is_configured": false, 00:09:41.525 "data_offset": 0, 00:09:41.525 "data_size": 0 00:09:41.525 } 00:09:41.525 ] 00:09:41.525 }' 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.525 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 [2024-11-26 22:54:20.984832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.091 BaseBdev3 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 22:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 [ 00:09:42.091 { 00:09:42.091 "name": "BaseBdev3", 00:09:42.091 "aliases": [ 00:09:42.091 "0df6721b-ffbd-48d4-bc35-0280ea18ff94" 00:09:42.091 ], 00:09:42.091 "product_name": "Malloc disk", 00:09:42.091 "block_size": 512, 00:09:42.091 "num_blocks": 65536, 00:09:42.091 "uuid": "0df6721b-ffbd-48d4-bc35-0280ea18ff94", 00:09:42.091 "assigned_rate_limits": { 00:09:42.091 "rw_ios_per_sec": 0, 00:09:42.091 "rw_mbytes_per_sec": 0, 00:09:42.091 "r_mbytes_per_sec": 0, 00:09:42.091 "w_mbytes_per_sec": 0 00:09:42.091 }, 00:09:42.091 "claimed": true, 00:09:42.091 "claim_type": "exclusive_write", 00:09:42.091 "zoned": false, 00:09:42.091 "supported_io_types": { 00:09:42.091 "read": true, 00:09:42.091 "write": true, 00:09:42.091 "unmap": true, 00:09:42.091 "flush": true, 00:09:42.091 "reset": true, 00:09:42.091 "nvme_admin": false, 00:09:42.091 "nvme_io": false, 00:09:42.091 "nvme_io_md": false, 00:09:42.091 "write_zeroes": true, 00:09:42.091 "zcopy": true, 00:09:42.091 
"get_zone_info": false, 00:09:42.091 "zone_management": false, 00:09:42.091 "zone_append": false, 00:09:42.091 "compare": false, 00:09:42.091 "compare_and_write": false, 00:09:42.091 "abort": true, 00:09:42.091 "seek_hole": false, 00:09:42.091 "seek_data": false, 00:09:42.091 "copy": true, 00:09:42.091 "nvme_iov_md": false 00:09:42.091 }, 00:09:42.091 "memory_domains": [ 00:09:42.091 { 00:09:42.091 "dma_device_id": "system", 00:09:42.091 "dma_device_type": 1 00:09:42.091 }, 00:09:42.091 { 00:09:42.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.091 "dma_device_type": 2 00:09:42.091 } 00:09:42.091 ], 00:09:42.091 "driver_specific": {} 00:09:42.091 } 00:09:42.091 ] 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.091 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.091 "name": "Existed_Raid", 00:09:42.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.091 "strip_size_kb": 64, 00:09:42.091 "state": "configuring", 00:09:42.091 "raid_level": "raid0", 00:09:42.091 "superblock": false, 00:09:42.091 "num_base_bdevs": 4, 00:09:42.091 "num_base_bdevs_discovered": 3, 00:09:42.091 "num_base_bdevs_operational": 4, 00:09:42.091 "base_bdevs_list": [ 00:09:42.091 { 00:09:42.091 "name": "BaseBdev1", 00:09:42.091 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:42.091 "is_configured": true, 00:09:42.091 "data_offset": 0, 00:09:42.091 "data_size": 65536 00:09:42.091 }, 00:09:42.091 { 00:09:42.091 "name": "BaseBdev2", 00:09:42.091 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:42.091 "is_configured": true, 00:09:42.091 "data_offset": 0, 00:09:42.091 "data_size": 65536 00:09:42.091 }, 00:09:42.091 { 00:09:42.091 "name": "BaseBdev3", 00:09:42.091 "uuid": "0df6721b-ffbd-48d4-bc35-0280ea18ff94", 00:09:42.091 "is_configured": true, 00:09:42.091 "data_offset": 0, 00:09:42.091 "data_size": 65536 
00:09:42.091 }, 00:09:42.091 { 00:09:42.091 "name": "BaseBdev4", 00:09:42.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.091 "is_configured": false, 00:09:42.092 "data_offset": 0, 00:09:42.092 "data_size": 0 00:09:42.092 } 00:09:42.092 ] 00:09:42.092 }' 00:09:42.092 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.092 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.347 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:42.347 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.347 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.606 [2024-11-26 22:54:21.477882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:42.606 [2024-11-26 22:54:21.478007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:42.606 [2024-11-26 22:54:21.478044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:42.606 [2024-11-26 22:54:21.478480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:42.606 [2024-11-26 22:54:21.478708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:42.606 BaseBdev4 00:09:42.606 [2024-11-26 22:54:21.478757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:42.606 [2024-11-26 22:54:21.479036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:42.606 22:54:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.606 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.606 [ 00:09:42.606 { 00:09:42.606 "name": "BaseBdev4", 00:09:42.606 "aliases": [ 00:09:42.607 "f69ecc2c-118c-4853-bd1b-af013360a9de" 00:09:42.607 ], 00:09:42.607 "product_name": "Malloc disk", 00:09:42.607 "block_size": 512, 00:09:42.607 "num_blocks": 65536, 00:09:42.607 "uuid": "f69ecc2c-118c-4853-bd1b-af013360a9de", 00:09:42.607 "assigned_rate_limits": { 00:09:42.607 "rw_ios_per_sec": 0, 00:09:42.607 "rw_mbytes_per_sec": 0, 00:09:42.607 "r_mbytes_per_sec": 0, 00:09:42.607 "w_mbytes_per_sec": 0 00:09:42.607 }, 00:09:42.607 "claimed": true, 00:09:42.607 "claim_type": "exclusive_write", 00:09:42.607 "zoned": false, 00:09:42.607 "supported_io_types": { 
00:09:42.607 "read": true, 00:09:42.607 "write": true, 00:09:42.607 "unmap": true, 00:09:42.607 "flush": true, 00:09:42.607 "reset": true, 00:09:42.607 "nvme_admin": false, 00:09:42.607 "nvme_io": false, 00:09:42.607 "nvme_io_md": false, 00:09:42.607 "write_zeroes": true, 00:09:42.607 "zcopy": true, 00:09:42.607 "get_zone_info": false, 00:09:42.607 "zone_management": false, 00:09:42.607 "zone_append": false, 00:09:42.607 "compare": false, 00:09:42.607 "compare_and_write": false, 00:09:42.607 "abort": true, 00:09:42.607 "seek_hole": false, 00:09:42.607 "seek_data": false, 00:09:42.607 "copy": true, 00:09:42.607 "nvme_iov_md": false 00:09:42.607 }, 00:09:42.607 "memory_domains": [ 00:09:42.607 { 00:09:42.607 "dma_device_id": "system", 00:09:42.607 "dma_device_type": 1 00:09:42.607 }, 00:09:42.607 { 00:09:42.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.607 "dma_device_type": 2 00:09:42.607 } 00:09:42.607 ], 00:09:42.607 "driver_specific": {} 00:09:42.607 } 00:09:42.607 ] 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.607 "name": "Existed_Raid", 00:09:42.607 "uuid": "770c51b7-8d01-4c75-a115-ed2b0c08709e", 00:09:42.607 "strip_size_kb": 64, 00:09:42.607 "state": "online", 00:09:42.607 "raid_level": "raid0", 00:09:42.607 "superblock": false, 00:09:42.607 "num_base_bdevs": 4, 00:09:42.607 "num_base_bdevs_discovered": 4, 00:09:42.607 "num_base_bdevs_operational": 4, 00:09:42.607 "base_bdevs_list": [ 00:09:42.607 { 00:09:42.607 "name": "BaseBdev1", 00:09:42.607 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:42.607 "is_configured": true, 00:09:42.607 "data_offset": 0, 00:09:42.607 "data_size": 65536 00:09:42.607 }, 00:09:42.607 { 00:09:42.607 "name": "BaseBdev2", 00:09:42.607 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:42.607 
"is_configured": true, 00:09:42.607 "data_offset": 0, 00:09:42.607 "data_size": 65536 00:09:42.607 }, 00:09:42.607 { 00:09:42.607 "name": "BaseBdev3", 00:09:42.607 "uuid": "0df6721b-ffbd-48d4-bc35-0280ea18ff94", 00:09:42.607 "is_configured": true, 00:09:42.607 "data_offset": 0, 00:09:42.607 "data_size": 65536 00:09:42.607 }, 00:09:42.607 { 00:09:42.607 "name": "BaseBdev4", 00:09:42.607 "uuid": "f69ecc2c-118c-4853-bd1b-af013360a9de", 00:09:42.607 "is_configured": true, 00:09:42.607 "data_offset": 0, 00:09:42.607 "data_size": 65536 00:09:42.607 } 00:09:42.607 ] 00:09:42.607 }' 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.607 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.866 22:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.125 [2024-11-26 22:54:21.998439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.125 "name": "Existed_Raid", 00:09:43.125 "aliases": [ 00:09:43.125 "770c51b7-8d01-4c75-a115-ed2b0c08709e" 00:09:43.125 ], 00:09:43.125 "product_name": "Raid Volume", 00:09:43.125 "block_size": 512, 00:09:43.125 "num_blocks": 262144, 00:09:43.125 "uuid": "770c51b7-8d01-4c75-a115-ed2b0c08709e", 00:09:43.125 "assigned_rate_limits": { 00:09:43.125 "rw_ios_per_sec": 0, 00:09:43.125 "rw_mbytes_per_sec": 0, 00:09:43.125 "r_mbytes_per_sec": 0, 00:09:43.125 "w_mbytes_per_sec": 0 00:09:43.125 }, 00:09:43.125 "claimed": false, 00:09:43.125 "zoned": false, 00:09:43.125 "supported_io_types": { 00:09:43.125 "read": true, 00:09:43.125 "write": true, 00:09:43.125 "unmap": true, 00:09:43.125 "flush": true, 00:09:43.125 "reset": true, 00:09:43.125 "nvme_admin": false, 00:09:43.125 "nvme_io": false, 00:09:43.125 "nvme_io_md": false, 00:09:43.125 "write_zeroes": true, 00:09:43.125 "zcopy": false, 00:09:43.125 "get_zone_info": false, 00:09:43.125 "zone_management": false, 00:09:43.125 "zone_append": false, 00:09:43.125 "compare": false, 00:09:43.125 "compare_and_write": false, 00:09:43.125 "abort": false, 00:09:43.125 "seek_hole": false, 00:09:43.125 "seek_data": false, 00:09:43.125 "copy": false, 00:09:43.125 "nvme_iov_md": false 00:09:43.125 }, 00:09:43.125 "memory_domains": [ 00:09:43.125 { 00:09:43.125 "dma_device_id": "system", 00:09:43.125 "dma_device_type": 1 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.125 "dma_device_type": 2 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "system", 00:09:43.125 "dma_device_type": 1 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.125 "dma_device_type": 2 00:09:43.125 }, 00:09:43.125 { 
00:09:43.125 "dma_device_id": "system", 00:09:43.125 "dma_device_type": 1 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.125 "dma_device_type": 2 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "system", 00:09:43.125 "dma_device_type": 1 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.125 "dma_device_type": 2 00:09:43.125 } 00:09:43.125 ], 00:09:43.125 "driver_specific": { 00:09:43.125 "raid": { 00:09:43.125 "uuid": "770c51b7-8d01-4c75-a115-ed2b0c08709e", 00:09:43.125 "strip_size_kb": 64, 00:09:43.125 "state": "online", 00:09:43.125 "raid_level": "raid0", 00:09:43.125 "superblock": false, 00:09:43.125 "num_base_bdevs": 4, 00:09:43.125 "num_base_bdevs_discovered": 4, 00:09:43.125 "num_base_bdevs_operational": 4, 00:09:43.125 "base_bdevs_list": [ 00:09:43.125 { 00:09:43.125 "name": "BaseBdev1", 00:09:43.125 "uuid": "c201c5b0-7f74-4b09-82b4-f15a3a50e084", 00:09:43.125 "is_configured": true, 00:09:43.125 "data_offset": 0, 00:09:43.125 "data_size": 65536 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "name": "BaseBdev2", 00:09:43.125 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:43.125 "is_configured": true, 00:09:43.125 "data_offset": 0, 00:09:43.125 "data_size": 65536 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "name": "BaseBdev3", 00:09:43.125 "uuid": "0df6721b-ffbd-48d4-bc35-0280ea18ff94", 00:09:43.125 "is_configured": true, 00:09:43.125 "data_offset": 0, 00:09:43.125 "data_size": 65536 00:09:43.125 }, 00:09:43.125 { 00:09:43.125 "name": "BaseBdev4", 00:09:43.125 "uuid": "f69ecc2c-118c-4853-bd1b-af013360a9de", 00:09:43.125 "is_configured": true, 00:09:43.125 "data_offset": 0, 00:09:43.125 "data_size": 65536 00:09:43.125 } 00:09:43.125 ] 00:09:43.125 } 00:09:43.125 } 00:09:43.125 }' 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:43.125 BaseBdev2 00:09:43.125 BaseBdev3 00:09:43.125 BaseBdev4' 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.125 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.126 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.384 [2024-11-26 22:54:22.334214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.384 [2024-11-26 22:54:22.334316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.384 [2024-11-26 22:54:22.334409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:43.384 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.385 "name": "Existed_Raid", 00:09:43.385 "uuid": "770c51b7-8d01-4c75-a115-ed2b0c08709e", 00:09:43.385 "strip_size_kb": 64, 00:09:43.385 "state": "offline", 00:09:43.385 "raid_level": "raid0", 00:09:43.385 "superblock": false, 00:09:43.385 "num_base_bdevs": 4, 00:09:43.385 "num_base_bdevs_discovered": 3, 00:09:43.385 "num_base_bdevs_operational": 3, 00:09:43.385 "base_bdevs_list": [ 00:09:43.385 { 00:09:43.385 "name": null, 00:09:43.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.385 "is_configured": false, 00:09:43.385 "data_offset": 0, 00:09:43.385 "data_size": 65536 00:09:43.385 }, 
00:09:43.385 { 00:09:43.385 "name": "BaseBdev2", 00:09:43.385 "uuid": "ebd06a8e-ee02-425c-a92d-c42b9ab885fb", 00:09:43.385 "is_configured": true, 00:09:43.385 "data_offset": 0, 00:09:43.385 "data_size": 65536 00:09:43.385 }, 00:09:43.385 { 00:09:43.385 "name": "BaseBdev3", 00:09:43.385 "uuid": "0df6721b-ffbd-48d4-bc35-0280ea18ff94", 00:09:43.385 "is_configured": true, 00:09:43.385 "data_offset": 0, 00:09:43.385 "data_size": 65536 00:09:43.385 }, 00:09:43.385 { 00:09:43.385 "name": "BaseBdev4", 00:09:43.385 "uuid": "f69ecc2c-118c-4853-bd1b-af013360a9de", 00:09:43.385 "is_configured": true, 00:09:43.385 "data_offset": 0, 00:09:43.385 "data_size": 65536 00:09:43.385 } 00:09:43.385 ] 00:09:43.385 }' 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.385 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.644 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 [2024-11-26 22:54:22.778983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.903 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 [2024-11-26 22:54:22.859867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 [2024-11-26 22:54:22.936853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:43.904 [2024-11-26 22:54:22.936992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.904 22:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 BaseBdev2 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ 
-z '' ]] 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.904 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 [ 00:09:44.166 { 00:09:44.166 "name": "BaseBdev2", 00:09:44.166 "aliases": [ 00:09:44.166 "6b49c371-8a35-40d5-a1fb-36e51f8763c3" 00:09:44.166 ], 00:09:44.166 "product_name": "Malloc disk", 00:09:44.166 "block_size": 512, 00:09:44.166 "num_blocks": 65536, 00:09:44.166 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:44.166 "assigned_rate_limits": { 00:09:44.166 "rw_ios_per_sec": 0, 00:09:44.166 "rw_mbytes_per_sec": 0, 00:09:44.166 "r_mbytes_per_sec": 0, 00:09:44.166 "w_mbytes_per_sec": 0 00:09:44.166 }, 00:09:44.166 "claimed": false, 00:09:44.166 "zoned": false, 00:09:44.166 "supported_io_types": { 00:09:44.166 "read": true, 00:09:44.166 "write": true, 00:09:44.166 "unmap": true, 00:09:44.166 "flush": true, 00:09:44.166 "reset": true, 00:09:44.166 "nvme_admin": false, 00:09:44.166 "nvme_io": false, 00:09:44.166 "nvme_io_md": false, 00:09:44.166 "write_zeroes": true, 00:09:44.166 "zcopy": true, 00:09:44.166 "get_zone_info": false, 00:09:44.166 "zone_management": false, 00:09:44.166 "zone_append": false, 00:09:44.166 "compare": false, 00:09:44.166 
"compare_and_write": false, 00:09:44.166 "abort": true, 00:09:44.166 "seek_hole": false, 00:09:44.166 "seek_data": false, 00:09:44.166 "copy": true, 00:09:44.166 "nvme_iov_md": false 00:09:44.166 }, 00:09:44.166 "memory_domains": [ 00:09:44.166 { 00:09:44.166 "dma_device_id": "system", 00:09:44.166 "dma_device_type": 1 00:09:44.166 }, 00:09:44.166 { 00:09:44.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.166 "dma_device_type": 2 00:09:44.166 } 00:09:44.166 ], 00:09:44.166 "driver_specific": {} 00:09:44.166 } 00:09:44.166 ] 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 BaseBdev3 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.166 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 [ 00:09:44.166 { 00:09:44.166 "name": "BaseBdev3", 00:09:44.166 "aliases": [ 00:09:44.166 "81183dda-8ed6-472e-9db1-222170f6cba4" 00:09:44.166 ], 00:09:44.166 "product_name": "Malloc disk", 00:09:44.166 "block_size": 512, 00:09:44.166 "num_blocks": 65536, 00:09:44.166 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:44.167 "assigned_rate_limits": { 00:09:44.167 "rw_ios_per_sec": 0, 00:09:44.167 "rw_mbytes_per_sec": 0, 00:09:44.167 "r_mbytes_per_sec": 0, 00:09:44.167 "w_mbytes_per_sec": 0 00:09:44.167 }, 00:09:44.167 "claimed": false, 00:09:44.167 "zoned": false, 00:09:44.167 "supported_io_types": { 00:09:44.167 "read": true, 00:09:44.167 "write": true, 00:09:44.167 "unmap": true, 00:09:44.167 "flush": true, 00:09:44.167 "reset": true, 00:09:44.167 "nvme_admin": false, 00:09:44.167 "nvme_io": false, 00:09:44.167 "nvme_io_md": false, 00:09:44.167 "write_zeroes": true, 00:09:44.167 "zcopy": true, 00:09:44.167 "get_zone_info": false, 00:09:44.167 "zone_management": false, 00:09:44.167 "zone_append": false, 00:09:44.167 "compare": false, 00:09:44.167 
"compare_and_write": false, 00:09:44.167 "abort": true, 00:09:44.167 "seek_hole": false, 00:09:44.167 "seek_data": false, 00:09:44.167 "copy": true, 00:09:44.167 "nvme_iov_md": false 00:09:44.167 }, 00:09:44.167 "memory_domains": [ 00:09:44.167 { 00:09:44.167 "dma_device_id": "system", 00:09:44.167 "dma_device_type": 1 00:09:44.167 }, 00:09:44.167 { 00:09:44.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.167 "dma_device_type": 2 00:09:44.167 } 00:09:44.167 ], 00:09:44.167 "driver_specific": {} 00:09:44.167 } 00:09:44.167 ] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.167 BaseBdev4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.167 [ 00:09:44.167 { 00:09:44.167 "name": "BaseBdev4", 00:09:44.167 "aliases": [ 00:09:44.167 "58644cc8-1a21-47d8-9ab7-96f00c55ca04" 00:09:44.167 ], 00:09:44.167 "product_name": "Malloc disk", 00:09:44.167 "block_size": 512, 00:09:44.167 "num_blocks": 65536, 00:09:44.167 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:44.167 "assigned_rate_limits": { 00:09:44.167 "rw_ios_per_sec": 0, 00:09:44.167 "rw_mbytes_per_sec": 0, 00:09:44.167 "r_mbytes_per_sec": 0, 00:09:44.167 "w_mbytes_per_sec": 0 00:09:44.167 }, 00:09:44.167 "claimed": false, 00:09:44.167 "zoned": false, 00:09:44.167 "supported_io_types": { 00:09:44.167 "read": true, 00:09:44.167 "write": true, 00:09:44.167 "unmap": true, 00:09:44.167 "flush": true, 00:09:44.167 "reset": true, 00:09:44.167 "nvme_admin": false, 00:09:44.167 "nvme_io": false, 00:09:44.167 "nvme_io_md": false, 00:09:44.167 "write_zeroes": true, 00:09:44.167 "zcopy": true, 00:09:44.167 "get_zone_info": false, 00:09:44.167 "zone_management": false, 00:09:44.167 "zone_append": false, 00:09:44.167 "compare": false, 00:09:44.167 
"compare_and_write": false, 00:09:44.167 "abort": true, 00:09:44.167 "seek_hole": false, 00:09:44.167 "seek_data": false, 00:09:44.167 "copy": true, 00:09:44.167 "nvme_iov_md": false 00:09:44.167 }, 00:09:44.167 "memory_domains": [ 00:09:44.167 { 00:09:44.167 "dma_device_id": "system", 00:09:44.167 "dma_device_type": 1 00:09:44.167 }, 00:09:44.167 { 00:09:44.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.167 "dma_device_type": 2 00:09:44.167 } 00:09:44.167 ], 00:09:44.167 "driver_specific": {} 00:09:44.167 } 00:09:44.167 ] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.167 [2024-11-26 22:54:23.186695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.167 [2024-11-26 22:54:23.186805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.167 [2024-11-26 22:54:23.186860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.167 [2024-11-26 22:54:23.189065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.167 [2024-11-26 22:54:23.189189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.167 "name": "Existed_Raid", 00:09:44.167 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.167 "strip_size_kb": 64, 00:09:44.167 "state": "configuring", 00:09:44.167 "raid_level": "raid0", 00:09:44.167 "superblock": false, 00:09:44.167 "num_base_bdevs": 4, 00:09:44.167 "num_base_bdevs_discovered": 3, 00:09:44.167 "num_base_bdevs_operational": 4, 00:09:44.167 "base_bdevs_list": [ 00:09:44.167 { 00:09:44.167 "name": "BaseBdev1", 00:09:44.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.167 "is_configured": false, 00:09:44.167 "data_offset": 0, 00:09:44.167 "data_size": 0 00:09:44.167 }, 00:09:44.167 { 00:09:44.167 "name": "BaseBdev2", 00:09:44.167 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:44.167 "is_configured": true, 00:09:44.167 "data_offset": 0, 00:09:44.167 "data_size": 65536 00:09:44.167 }, 00:09:44.167 { 00:09:44.167 "name": "BaseBdev3", 00:09:44.167 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:44.167 "is_configured": true, 00:09:44.167 "data_offset": 0, 00:09:44.167 "data_size": 65536 00:09:44.167 }, 00:09:44.167 { 00:09:44.167 "name": "BaseBdev4", 00:09:44.167 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:44.167 "is_configured": true, 00:09:44.167 "data_offset": 0, 00:09:44.167 "data_size": 65536 00:09:44.167 } 00:09:44.167 ] 00:09:44.167 }' 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.167 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.736 [2024-11-26 22:54:23.646805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.736 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.737 "name": "Existed_Raid", 00:09:44.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.737 
"strip_size_kb": 64, 00:09:44.737 "state": "configuring", 00:09:44.737 "raid_level": "raid0", 00:09:44.737 "superblock": false, 00:09:44.737 "num_base_bdevs": 4, 00:09:44.737 "num_base_bdevs_discovered": 2, 00:09:44.737 "num_base_bdevs_operational": 4, 00:09:44.737 "base_bdevs_list": [ 00:09:44.737 { 00:09:44.737 "name": "BaseBdev1", 00:09:44.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.737 "is_configured": false, 00:09:44.737 "data_offset": 0, 00:09:44.737 "data_size": 0 00:09:44.737 }, 00:09:44.737 { 00:09:44.737 "name": null, 00:09:44.737 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:44.737 "is_configured": false, 00:09:44.737 "data_offset": 0, 00:09:44.737 "data_size": 65536 00:09:44.737 }, 00:09:44.737 { 00:09:44.737 "name": "BaseBdev3", 00:09:44.737 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:44.737 "is_configured": true, 00:09:44.737 "data_offset": 0, 00:09:44.737 "data_size": 65536 00:09:44.737 }, 00:09:44.737 { 00:09:44.737 "name": "BaseBdev4", 00:09:44.737 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:44.737 "is_configured": true, 00:09:44.737 "data_offset": 0, 00:09:44.737 "data_size": 65536 00:09:44.737 } 00:09:44.737 ] 00:09:44.737 }' 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.737 22:54:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.304 
22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 [2024-11-26 22:54:24.187800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.304 BaseBdev1 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.304 22:54:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.304 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 [ 00:09:45.304 { 00:09:45.304 "name": "BaseBdev1", 00:09:45.304 "aliases": [ 00:09:45.304 "f06cc3be-242b-426e-ac66-111dc9d5921b" 00:09:45.304 ], 00:09:45.304 "product_name": "Malloc disk", 00:09:45.304 "block_size": 512, 00:09:45.304 "num_blocks": 65536, 00:09:45.304 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:45.304 "assigned_rate_limits": { 00:09:45.304 "rw_ios_per_sec": 0, 00:09:45.304 "rw_mbytes_per_sec": 0, 00:09:45.304 "r_mbytes_per_sec": 0, 00:09:45.304 "w_mbytes_per_sec": 0 00:09:45.304 }, 00:09:45.304 "claimed": true, 00:09:45.304 "claim_type": "exclusive_write", 00:09:45.304 "zoned": false, 00:09:45.304 "supported_io_types": { 00:09:45.304 "read": true, 00:09:45.304 "write": true, 00:09:45.304 "unmap": true, 00:09:45.304 "flush": true, 00:09:45.304 "reset": true, 00:09:45.304 "nvme_admin": false, 00:09:45.304 "nvme_io": false, 00:09:45.304 "nvme_io_md": false, 00:09:45.304 "write_zeroes": true, 00:09:45.304 "zcopy": true, 00:09:45.304 "get_zone_info": false, 00:09:45.304 "zone_management": false, 00:09:45.304 "zone_append": false, 00:09:45.304 "compare": false, 00:09:45.305 "compare_and_write": false, 00:09:45.305 "abort": true, 00:09:45.305 "seek_hole": false, 00:09:45.305 "seek_data": false, 00:09:45.305 "copy": true, 00:09:45.305 "nvme_iov_md": false 00:09:45.305 }, 00:09:45.305 "memory_domains": [ 00:09:45.305 { 00:09:45.305 "dma_device_id": "system", 00:09:45.305 "dma_device_type": 1 00:09:45.305 }, 00:09:45.305 { 00:09:45.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.305 "dma_device_type": 2 00:09:45.305 } 00:09:45.305 ], 00:09:45.305 "driver_specific": {} 00:09:45.305 } 00:09:45.305 ] 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.305 22:54:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.305 "name": "Existed_Raid", 00:09:45.305 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:45.305 "strip_size_kb": 64, 00:09:45.305 "state": "configuring", 00:09:45.305 "raid_level": "raid0", 00:09:45.305 "superblock": false, 00:09:45.305 "num_base_bdevs": 4, 00:09:45.305 "num_base_bdevs_discovered": 3, 00:09:45.305 "num_base_bdevs_operational": 4, 00:09:45.305 "base_bdevs_list": [ 00:09:45.305 { 00:09:45.305 "name": "BaseBdev1", 00:09:45.305 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:45.305 "is_configured": true, 00:09:45.305 "data_offset": 0, 00:09:45.305 "data_size": 65536 00:09:45.305 }, 00:09:45.305 { 00:09:45.305 "name": null, 00:09:45.305 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:45.305 "is_configured": false, 00:09:45.305 "data_offset": 0, 00:09:45.305 "data_size": 65536 00:09:45.305 }, 00:09:45.305 { 00:09:45.305 "name": "BaseBdev3", 00:09:45.305 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:45.305 "is_configured": true, 00:09:45.305 "data_offset": 0, 00:09:45.305 "data_size": 65536 00:09:45.305 }, 00:09:45.305 { 00:09:45.305 "name": "BaseBdev4", 00:09:45.305 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:45.305 "is_configured": true, 00:09:45.305 "data_offset": 0, 00:09:45.305 "data_size": 65536 00:09:45.305 } 00:09:45.305 ] 00:09:45.305 }' 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.305 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.872 [2024-11-26 22:54:24.736008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.872 "name": "Existed_Raid", 00:09:45.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.872 "strip_size_kb": 64, 00:09:45.872 "state": "configuring", 00:09:45.872 "raid_level": "raid0", 00:09:45.872 "superblock": false, 00:09:45.872 "num_base_bdevs": 4, 00:09:45.872 "num_base_bdevs_discovered": 2, 00:09:45.872 "num_base_bdevs_operational": 4, 00:09:45.872 "base_bdevs_list": [ 00:09:45.872 { 00:09:45.872 "name": "BaseBdev1", 00:09:45.872 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:45.872 "is_configured": true, 00:09:45.872 "data_offset": 0, 00:09:45.872 "data_size": 65536 00:09:45.872 }, 00:09:45.872 { 00:09:45.872 "name": null, 00:09:45.872 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:45.872 "is_configured": false, 00:09:45.872 "data_offset": 0, 00:09:45.872 "data_size": 65536 00:09:45.872 }, 00:09:45.872 { 00:09:45.872 "name": null, 00:09:45.872 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:45.872 "is_configured": false, 00:09:45.872 "data_offset": 0, 00:09:45.872 "data_size": 65536 00:09:45.872 }, 00:09:45.872 { 00:09:45.872 "name": "BaseBdev4", 00:09:45.872 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:45.872 "is_configured": true, 00:09:45.872 "data_offset": 0, 00:09:45.872 "data_size": 65536 00:09:45.872 } 00:09:45.872 ] 00:09:45.872 }' 00:09:45.872 22:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.872 22:54:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.132 [2024-11-26 22:54:25.180135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.132 22:54:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.132 "name": "Existed_Raid", 00:09:46.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.132 "strip_size_kb": 64, 00:09:46.132 "state": "configuring", 00:09:46.132 "raid_level": "raid0", 00:09:46.132 "superblock": false, 00:09:46.132 "num_base_bdevs": 4, 00:09:46.132 "num_base_bdevs_discovered": 3, 00:09:46.132 "num_base_bdevs_operational": 4, 00:09:46.132 "base_bdevs_list": [ 00:09:46.132 { 00:09:46.132 "name": "BaseBdev1", 00:09:46.132 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:46.132 "is_configured": true, 00:09:46.132 "data_offset": 0, 00:09:46.132 "data_size": 65536 00:09:46.132 }, 00:09:46.132 { 00:09:46.132 "name": null, 00:09:46.132 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:46.132 "is_configured": false, 00:09:46.132 "data_offset": 
0, 00:09:46.132 "data_size": 65536 00:09:46.132 }, 00:09:46.132 { 00:09:46.132 "name": "BaseBdev3", 00:09:46.132 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:46.132 "is_configured": true, 00:09:46.132 "data_offset": 0, 00:09:46.132 "data_size": 65536 00:09:46.132 }, 00:09:46.132 { 00:09:46.132 "name": "BaseBdev4", 00:09:46.132 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:46.132 "is_configured": true, 00:09:46.132 "data_offset": 0, 00:09:46.132 "data_size": 65536 00:09:46.132 } 00:09:46.132 ] 00:09:46.132 }' 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.132 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.700 [2024-11-26 22:54:25.668288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.700 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.700 22:54:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.701 "name": "Existed_Raid", 00:09:46.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.701 "strip_size_kb": 64, 00:09:46.701 "state": "configuring", 00:09:46.701 
"raid_level": "raid0", 00:09:46.701 "superblock": false, 00:09:46.701 "num_base_bdevs": 4, 00:09:46.701 "num_base_bdevs_discovered": 2, 00:09:46.701 "num_base_bdevs_operational": 4, 00:09:46.701 "base_bdevs_list": [ 00:09:46.701 { 00:09:46.701 "name": null, 00:09:46.701 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:46.701 "is_configured": false, 00:09:46.701 "data_offset": 0, 00:09:46.701 "data_size": 65536 00:09:46.701 }, 00:09:46.701 { 00:09:46.701 "name": null, 00:09:46.701 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:46.701 "is_configured": false, 00:09:46.701 "data_offset": 0, 00:09:46.701 "data_size": 65536 00:09:46.701 }, 00:09:46.701 { 00:09:46.701 "name": "BaseBdev3", 00:09:46.701 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:46.701 "is_configured": true, 00:09:46.701 "data_offset": 0, 00:09:46.701 "data_size": 65536 00:09:46.701 }, 00:09:46.701 { 00:09:46.701 "name": "BaseBdev4", 00:09:46.701 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:46.701 "is_configured": true, 00:09:46.701 "data_offset": 0, 00:09:46.701 "data_size": 65536 00:09:46.701 } 00:09:46.701 ] 00:09:46.701 }' 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.701 22:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 [2024-11-26 22:54:26.163886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.270 "name": "Existed_Raid", 00:09:47.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.270 "strip_size_kb": 64, 00:09:47.270 "state": "configuring", 00:09:47.270 "raid_level": "raid0", 00:09:47.270 "superblock": false, 00:09:47.270 "num_base_bdevs": 4, 00:09:47.270 "num_base_bdevs_discovered": 3, 00:09:47.270 "num_base_bdevs_operational": 4, 00:09:47.270 "base_bdevs_list": [ 00:09:47.270 { 00:09:47.270 "name": null, 00:09:47.270 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:47.270 "is_configured": false, 00:09:47.270 "data_offset": 0, 00:09:47.270 "data_size": 65536 00:09:47.270 }, 00:09:47.270 { 00:09:47.270 "name": "BaseBdev2", 00:09:47.270 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:47.270 "is_configured": true, 00:09:47.270 "data_offset": 0, 00:09:47.270 "data_size": 65536 00:09:47.270 }, 00:09:47.270 { 00:09:47.270 "name": "BaseBdev3", 00:09:47.270 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:47.270 "is_configured": true, 00:09:47.270 "data_offset": 0, 00:09:47.270 "data_size": 65536 00:09:47.270 }, 00:09:47.270 { 00:09:47.270 "name": "BaseBdev4", 00:09:47.270 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:47.270 "is_configured": true, 00:09:47.270 "data_offset": 0, 00:09:47.270 "data_size": 65536 00:09:47.270 } 00:09:47.270 ] 00:09:47.270 }' 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.270 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.529 22:54:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.529 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f06cc3be-242b-426e-ac66-111dc9d5921b 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 [2024-11-26 22:54:26.684512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:47.789 [2024-11-26 22:54:26.684655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:47.789 [2024-11-26 22:54:26.684688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:47.789 
[2024-11-26 22:54:26.685042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:09:47.789 [2024-11-26 22:54:26.685239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:47.789 [2024-11-26 22:54:26.685296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:47.789 [2024-11-26 22:54:26.685563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.789 NewBaseBdev 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:47.789 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 [ 00:09:47.789 { 00:09:47.789 "name": "NewBaseBdev", 00:09:47.789 "aliases": [ 00:09:47.789 "f06cc3be-242b-426e-ac66-111dc9d5921b" 00:09:47.789 ], 00:09:47.789 "product_name": "Malloc disk", 00:09:47.789 "block_size": 512, 00:09:47.789 "num_blocks": 65536, 00:09:47.789 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:47.789 "assigned_rate_limits": { 00:09:47.789 "rw_ios_per_sec": 0, 00:09:47.789 "rw_mbytes_per_sec": 0, 00:09:47.789 "r_mbytes_per_sec": 0, 00:09:47.789 "w_mbytes_per_sec": 0 00:09:47.789 }, 00:09:47.789 "claimed": true, 00:09:47.789 "claim_type": "exclusive_write", 00:09:47.789 "zoned": false, 00:09:47.789 "supported_io_types": { 00:09:47.789 "read": true, 00:09:47.789 "write": true, 00:09:47.789 "unmap": true, 00:09:47.789 "flush": true, 00:09:47.789 "reset": true, 00:09:47.789 "nvme_admin": false, 00:09:47.789 "nvme_io": false, 00:09:47.789 "nvme_io_md": false, 00:09:47.789 "write_zeroes": true, 00:09:47.789 "zcopy": true, 00:09:47.789 "get_zone_info": false, 00:09:47.789 "zone_management": false, 00:09:47.789 "zone_append": false, 00:09:47.789 "compare": false, 00:09:47.789 "compare_and_write": false, 00:09:47.789 "abort": true, 00:09:47.789 "seek_hole": false, 00:09:47.789 "seek_data": false, 00:09:47.789 "copy": true, 00:09:47.790 "nvme_iov_md": false 00:09:47.790 }, 00:09:47.790 "memory_domains": [ 00:09:47.790 { 00:09:47.790 "dma_device_id": "system", 00:09:47.790 "dma_device_type": 1 00:09:47.790 }, 00:09:47.790 { 00:09:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.790 "dma_device_type": 2 00:09:47.790 } 00:09:47.790 ], 00:09:47.790 "driver_specific": {} 00:09:47.790 } 00:09:47.790 ] 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.790 "name": "Existed_Raid", 00:09:47.790 "uuid": "e89ded93-de49-400b-80c9-a3da3e4ee6b1", 00:09:47.790 "strip_size_kb": 64, 00:09:47.790 "state": "online", 
00:09:47.790 "raid_level": "raid0", 00:09:47.790 "superblock": false, 00:09:47.790 "num_base_bdevs": 4, 00:09:47.790 "num_base_bdevs_discovered": 4, 00:09:47.790 "num_base_bdevs_operational": 4, 00:09:47.790 "base_bdevs_list": [ 00:09:47.790 { 00:09:47.790 "name": "NewBaseBdev", 00:09:47.790 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:47.790 "is_configured": true, 00:09:47.790 "data_offset": 0, 00:09:47.790 "data_size": 65536 00:09:47.790 }, 00:09:47.790 { 00:09:47.790 "name": "BaseBdev2", 00:09:47.790 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:47.790 "is_configured": true, 00:09:47.790 "data_offset": 0, 00:09:47.790 "data_size": 65536 00:09:47.790 }, 00:09:47.790 { 00:09:47.790 "name": "BaseBdev3", 00:09:47.790 "uuid": "81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:47.790 "is_configured": true, 00:09:47.790 "data_offset": 0, 00:09:47.790 "data_size": 65536 00:09:47.790 }, 00:09:47.790 { 00:09:47.790 "name": "BaseBdev4", 00:09:47.790 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:47.790 "is_configured": true, 00:09:47.790 "data_offset": 0, 00:09:47.790 "data_size": 65536 00:09:47.790 } 00:09:47.790 ] 00:09:47.790 }' 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.790 22:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.050 [2024-11-26 22:54:27.112928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.050 "name": "Existed_Raid", 00:09:48.050 "aliases": [ 00:09:48.050 "e89ded93-de49-400b-80c9-a3da3e4ee6b1" 00:09:48.050 ], 00:09:48.050 "product_name": "Raid Volume", 00:09:48.050 "block_size": 512, 00:09:48.050 "num_blocks": 262144, 00:09:48.050 "uuid": "e89ded93-de49-400b-80c9-a3da3e4ee6b1", 00:09:48.050 "assigned_rate_limits": { 00:09:48.050 "rw_ios_per_sec": 0, 00:09:48.050 "rw_mbytes_per_sec": 0, 00:09:48.050 "r_mbytes_per_sec": 0, 00:09:48.050 "w_mbytes_per_sec": 0 00:09:48.050 }, 00:09:48.050 "claimed": false, 00:09:48.050 "zoned": false, 00:09:48.050 "supported_io_types": { 00:09:48.050 "read": true, 00:09:48.050 "write": true, 00:09:48.050 "unmap": true, 00:09:48.050 "flush": true, 00:09:48.050 "reset": true, 00:09:48.050 "nvme_admin": false, 00:09:48.050 "nvme_io": false, 00:09:48.050 "nvme_io_md": false, 00:09:48.050 "write_zeroes": true, 00:09:48.050 "zcopy": false, 00:09:48.050 "get_zone_info": false, 00:09:48.050 "zone_management": false, 00:09:48.050 "zone_append": false, 00:09:48.050 "compare": false, 00:09:48.050 "compare_and_write": false, 00:09:48.050 "abort": false, 00:09:48.050 "seek_hole": false, 00:09:48.050 "seek_data": 
false, 00:09:48.050 "copy": false, 00:09:48.050 "nvme_iov_md": false 00:09:48.050 }, 00:09:48.050 "memory_domains": [ 00:09:48.050 { 00:09:48.050 "dma_device_id": "system", 00:09:48.050 "dma_device_type": 1 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.050 "dma_device_type": 2 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "system", 00:09:48.050 "dma_device_type": 1 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.050 "dma_device_type": 2 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "system", 00:09:48.050 "dma_device_type": 1 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.050 "dma_device_type": 2 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "system", 00:09:48.050 "dma_device_type": 1 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.050 "dma_device_type": 2 00:09:48.050 } 00:09:48.050 ], 00:09:48.050 "driver_specific": { 00:09:48.050 "raid": { 00:09:48.050 "uuid": "e89ded93-de49-400b-80c9-a3da3e4ee6b1", 00:09:48.050 "strip_size_kb": 64, 00:09:48.050 "state": "online", 00:09:48.050 "raid_level": "raid0", 00:09:48.050 "superblock": false, 00:09:48.050 "num_base_bdevs": 4, 00:09:48.050 "num_base_bdevs_discovered": 4, 00:09:48.050 "num_base_bdevs_operational": 4, 00:09:48.050 "base_bdevs_list": [ 00:09:48.050 { 00:09:48.050 "name": "NewBaseBdev", 00:09:48.050 "uuid": "f06cc3be-242b-426e-ac66-111dc9d5921b", 00:09:48.050 "is_configured": true, 00:09:48.050 "data_offset": 0, 00:09:48.050 "data_size": 65536 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "name": "BaseBdev2", 00:09:48.050 "uuid": "6b49c371-8a35-40d5-a1fb-36e51f8763c3", 00:09:48.050 "is_configured": true, 00:09:48.050 "data_offset": 0, 00:09:48.050 "data_size": 65536 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "name": "BaseBdev3", 00:09:48.050 "uuid": 
"81183dda-8ed6-472e-9db1-222170f6cba4", 00:09:48.050 "is_configured": true, 00:09:48.050 "data_offset": 0, 00:09:48.050 "data_size": 65536 00:09:48.050 }, 00:09:48.050 { 00:09:48.050 "name": "BaseBdev4", 00:09:48.050 "uuid": "58644cc8-1a21-47d8-9ab7-96f00c55ca04", 00:09:48.050 "is_configured": true, 00:09:48.050 "data_offset": 0, 00:09:48.050 "data_size": 65536 00:09:48.050 } 00:09:48.050 ] 00:09:48.050 } 00:09:48.050 } 00:09:48.050 }' 00:09:48.050 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:48.309 BaseBdev2 00:09:48.309 BaseBdev3 00:09:48.309 BaseBdev4' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.309 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.568 [2024-11-26 22:54:27.452738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.568 [2024-11-26 22:54:27.452765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.568 [2024-11-26 22:54:27.452844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.568 [2024-11-26 22:54:27.452914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.568 [2024-11-26 22:54:27.452933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.568 22:54:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81952 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81952 ']' 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81952 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81952 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81952' 00:09:48.568 killing process with pid 81952 00:09:48.568 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 81952 00:09:48.569 [2024-11-26 22:54:27.524627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.569 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 81952 00:09:48.569 [2024-11-26 22:54:27.600778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.828 ************************************ 00:09:48.828 END TEST raid_state_function_test 00:09:48.828 ************************************ 00:09:48.828 22:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:48.828 00:09:48.828 real 0m9.719s 00:09:48.828 user 0m16.314s 00:09:48.828 sys 0m2.122s 00:09:48.828 22:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.828 22:54:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.088 22:54:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:49.088 22:54:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.088 22:54:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.088 22:54:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.088 ************************************ 00:09:49.088 START TEST raid_state_function_test_sb 00:09:49.088 ************************************ 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.088 22:54:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82607 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.088 Process raid pid: 82607 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82607' 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82607 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82607 ']' 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.088 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.089 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.089 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.089 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.089 [2024-11-26 22:54:28.118671] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:49.089 [2024-11-26 22:54:28.118889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.348 [2024-11-26 22:54:28.259662] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:49.348 [2024-11-26 22:54:28.296512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.348 [2024-11-26 22:54:28.335318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.348 [2024-11-26 22:54:28.411321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.348 [2024-11-26 22:54:28.411368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.918 [2024-11-26 22:54:28.942408] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.918 [2024-11-26 22:54:28.942471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.918 [2024-11-26 22:54:28.942486] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.918 [2024-11-26 22:54:28.942495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.918 [2024-11-26 22:54:28.942509] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:49.918 [2024-11-26 22:54:28.942519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.918 [2024-11-26 22:54:28.942529] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:09:49.918 [2024-11-26 22:54:28.942537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.918 22:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:49.918 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.918 "name": "Existed_Raid", 00:09:49.918 "uuid": "ae1e55c9-bd85-4e4b-98d0-396ddc83044d", 00:09:49.918 "strip_size_kb": 64, 00:09:49.918 "state": "configuring", 00:09:49.918 "raid_level": "raid0", 00:09:49.918 "superblock": true, 00:09:49.918 "num_base_bdevs": 4, 00:09:49.918 "num_base_bdevs_discovered": 0, 00:09:49.918 "num_base_bdevs_operational": 4, 00:09:49.918 "base_bdevs_list": [ 00:09:49.918 { 00:09:49.918 "name": "BaseBdev1", 00:09:49.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.918 "is_configured": false, 00:09:49.918 "data_offset": 0, 00:09:49.918 "data_size": 0 00:09:49.918 }, 00:09:49.918 { 00:09:49.918 "name": "BaseBdev2", 00:09:49.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.918 "is_configured": false, 00:09:49.918 "data_offset": 0, 00:09:49.918 "data_size": 0 00:09:49.918 }, 00:09:49.918 { 00:09:49.918 "name": "BaseBdev3", 00:09:49.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.918 "is_configured": false, 00:09:49.918 "data_offset": 0, 00:09:49.918 "data_size": 0 00:09:49.918 }, 00:09:49.918 { 00:09:49.918 "name": "BaseBdev4", 00:09:49.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.918 "is_configured": false, 00:09:49.918 "data_offset": 0, 00:09:49.918 "data_size": 0 00:09:49.918 } 00:09:49.918 ] 00:09:49.918 }' 00:09:49.918 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.918 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:50.487 [2024-11-26 22:54:29.362329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.487 [2024-11-26 22:54:29.362370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 [2024-11-26 22:54:29.374390] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.487 [2024-11-26 22:54:29.374430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.487 [2024-11-26 22:54:29.374442] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.487 [2024-11-26 22:54:29.374452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.487 [2024-11-26 22:54:29.374462] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.487 [2024-11-26 22:54:29.374471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.487 [2024-11-26 22:54:29.374480] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:50.487 [2024-11-26 22:54:29.374489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.487 22:54:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 [2024-11-26 22:54:29.401399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.487 BaseBdev1 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.487 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 [ 00:09:50.487 { 00:09:50.487 "name": "BaseBdev1", 00:09:50.487 "aliases": [ 00:09:50.487 "f041e28b-4713-4afe-b4ef-fe73f845a309" 00:09:50.487 ], 00:09:50.487 "product_name": "Malloc disk", 00:09:50.487 "block_size": 512, 00:09:50.487 "num_blocks": 65536, 00:09:50.487 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:50.487 "assigned_rate_limits": { 00:09:50.487 "rw_ios_per_sec": 0, 00:09:50.487 "rw_mbytes_per_sec": 0, 00:09:50.487 "r_mbytes_per_sec": 0, 00:09:50.487 "w_mbytes_per_sec": 0 00:09:50.487 }, 00:09:50.487 "claimed": true, 00:09:50.487 "claim_type": "exclusive_write", 00:09:50.487 "zoned": false, 00:09:50.487 "supported_io_types": { 00:09:50.487 "read": true, 00:09:50.487 "write": true, 00:09:50.487 "unmap": true, 00:09:50.487 "flush": true, 00:09:50.487 "reset": true, 00:09:50.487 "nvme_admin": false, 00:09:50.487 "nvme_io": false, 00:09:50.487 "nvme_io_md": false, 00:09:50.487 "write_zeroes": true, 00:09:50.487 "zcopy": true, 00:09:50.487 "get_zone_info": false, 00:09:50.487 "zone_management": false, 00:09:50.487 "zone_append": false, 00:09:50.487 "compare": false, 00:09:50.487 "compare_and_write": false, 00:09:50.487 "abort": true, 00:09:50.487 "seek_hole": false, 00:09:50.487 "seek_data": false, 00:09:50.487 "copy": true, 00:09:50.487 "nvme_iov_md": false 00:09:50.487 }, 00:09:50.487 "memory_domains": [ 00:09:50.487 { 00:09:50.487 "dma_device_id": "system", 00:09:50.487 "dma_device_type": 1 00:09:50.487 }, 00:09:50.487 { 00:09:50.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.487 "dma_device_type": 2 00:09:50.487 } 00:09:50.487 ], 00:09:50.487 "driver_specific": {} 00:09:50.487 } 00:09:50.487 ] 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.488 
22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.488 "name": "Existed_Raid", 00:09:50.488 "uuid": "5a1f4440-935c-45ae-a1ed-10743a798e71", 00:09:50.488 "strip_size_kb": 
64, 00:09:50.488 "state": "configuring", 00:09:50.488 "raid_level": "raid0", 00:09:50.488 "superblock": true, 00:09:50.488 "num_base_bdevs": 4, 00:09:50.488 "num_base_bdevs_discovered": 1, 00:09:50.488 "num_base_bdevs_operational": 4, 00:09:50.488 "base_bdevs_list": [ 00:09:50.488 { 00:09:50.488 "name": "BaseBdev1", 00:09:50.488 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:50.488 "is_configured": true, 00:09:50.488 "data_offset": 2048, 00:09:50.488 "data_size": 63488 00:09:50.488 }, 00:09:50.488 { 00:09:50.488 "name": "BaseBdev2", 00:09:50.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.488 "is_configured": false, 00:09:50.488 "data_offset": 0, 00:09:50.488 "data_size": 0 00:09:50.488 }, 00:09:50.488 { 00:09:50.488 "name": "BaseBdev3", 00:09:50.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.488 "is_configured": false, 00:09:50.488 "data_offset": 0, 00:09:50.488 "data_size": 0 00:09:50.488 }, 00:09:50.488 { 00:09:50.488 "name": "BaseBdev4", 00:09:50.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.488 "is_configured": false, 00:09:50.488 "data_offset": 0, 00:09:50.488 "data_size": 0 00:09:50.488 } 00:09:50.488 ] 00:09:50.488 }' 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.488 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.747 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.747 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.747 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.007 [2024-11-26 22:54:29.877547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.007 [2024-11-26 22:54:29.877617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.007 [2024-11-26 22:54:29.889606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.007 [2024-11-26 22:54:29.891755] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.007 [2024-11-26 22:54:29.891796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.007 [2024-11-26 22:54:29.891810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.007 [2024-11-26 22:54:29.891819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.007 [2024-11-26 22:54:29.891829] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:51.007 [2024-11-26 22:54:29.891838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.007 22:54:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.007 "name": "Existed_Raid", 00:09:51.007 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:51.007 "strip_size_kb": 64, 00:09:51.007 "state": "configuring", 00:09:51.007 "raid_level": "raid0", 00:09:51.007 "superblock": true, 00:09:51.007 "num_base_bdevs": 4, 00:09:51.007 
"num_base_bdevs_discovered": 1, 00:09:51.007 "num_base_bdevs_operational": 4, 00:09:51.007 "base_bdevs_list": [ 00:09:51.007 { 00:09:51.007 "name": "BaseBdev1", 00:09:51.007 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:51.007 "is_configured": true, 00:09:51.007 "data_offset": 2048, 00:09:51.007 "data_size": 63488 00:09:51.007 }, 00:09:51.007 { 00:09:51.007 "name": "BaseBdev2", 00:09:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.007 "is_configured": false, 00:09:51.007 "data_offset": 0, 00:09:51.007 "data_size": 0 00:09:51.007 }, 00:09:51.007 { 00:09:51.007 "name": "BaseBdev3", 00:09:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.007 "is_configured": false, 00:09:51.007 "data_offset": 0, 00:09:51.007 "data_size": 0 00:09:51.007 }, 00:09:51.007 { 00:09:51.007 "name": "BaseBdev4", 00:09:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.007 "is_configured": false, 00:09:51.007 "data_offset": 0, 00:09:51.007 "data_size": 0 00:09:51.007 } 00:09:51.007 ] 00:09:51.007 }' 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.007 22:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 [2024-11-26 22:54:30.318493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.267 BaseBdev2 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.267 22:54:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 [ 00:09:51.267 { 00:09:51.267 "name": "BaseBdev2", 00:09:51.267 "aliases": [ 00:09:51.267 "354ddd7e-22e9-4f5e-af18-935e682dd581" 00:09:51.267 ], 00:09:51.267 "product_name": "Malloc disk", 00:09:51.267 "block_size": 512, 00:09:51.267 "num_blocks": 65536, 00:09:51.267 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:51.267 "assigned_rate_limits": { 00:09:51.267 "rw_ios_per_sec": 0, 00:09:51.267 "rw_mbytes_per_sec": 0, 00:09:51.267 "r_mbytes_per_sec": 0, 00:09:51.267 "w_mbytes_per_sec": 0 00:09:51.267 }, 00:09:51.267 "claimed": true, 00:09:51.267 "claim_type": "exclusive_write", 00:09:51.267 "zoned": false, 
00:09:51.267 "supported_io_types": { 00:09:51.267 "read": true, 00:09:51.267 "write": true, 00:09:51.267 "unmap": true, 00:09:51.267 "flush": true, 00:09:51.267 "reset": true, 00:09:51.267 "nvme_admin": false, 00:09:51.267 "nvme_io": false, 00:09:51.267 "nvme_io_md": false, 00:09:51.267 "write_zeroes": true, 00:09:51.267 "zcopy": true, 00:09:51.267 "get_zone_info": false, 00:09:51.267 "zone_management": false, 00:09:51.267 "zone_append": false, 00:09:51.267 "compare": false, 00:09:51.267 "compare_and_write": false, 00:09:51.267 "abort": true, 00:09:51.267 "seek_hole": false, 00:09:51.267 "seek_data": false, 00:09:51.267 "copy": true, 00:09:51.267 "nvme_iov_md": false 00:09:51.267 }, 00:09:51.267 "memory_domains": [ 00:09:51.267 { 00:09:51.267 "dma_device_id": "system", 00:09:51.267 "dma_device_type": 1 00:09:51.267 }, 00:09:51.267 { 00:09:51.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.267 "dma_device_type": 2 00:09:51.267 } 00:09:51.267 ], 00:09:51.267 "driver_specific": {} 00:09:51.267 } 00:09:51.267 ] 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.267 22:54:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.527 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.527 "name": "Existed_Raid", 00:09:51.527 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:51.527 "strip_size_kb": 64, 00:09:51.527 "state": "configuring", 00:09:51.527 "raid_level": "raid0", 00:09:51.527 "superblock": true, 00:09:51.527 "num_base_bdevs": 4, 00:09:51.527 "num_base_bdevs_discovered": 2, 00:09:51.527 "num_base_bdevs_operational": 4, 00:09:51.527 "base_bdevs_list": [ 00:09:51.527 { 00:09:51.527 "name": "BaseBdev1", 00:09:51.527 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:51.527 "is_configured": true, 00:09:51.527 "data_offset": 2048, 00:09:51.527 "data_size": 63488 00:09:51.527 }, 00:09:51.527 { 
00:09:51.527 "name": "BaseBdev2", 00:09:51.527 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:51.527 "is_configured": true, 00:09:51.527 "data_offset": 2048, 00:09:51.527 "data_size": 63488 00:09:51.527 }, 00:09:51.527 { 00:09:51.527 "name": "BaseBdev3", 00:09:51.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.527 "is_configured": false, 00:09:51.527 "data_offset": 0, 00:09:51.527 "data_size": 0 00:09:51.527 }, 00:09:51.527 { 00:09:51.527 "name": "BaseBdev4", 00:09:51.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.527 "is_configured": false, 00:09:51.527 "data_offset": 0, 00:09:51.527 "data_size": 0 00:09:51.527 } 00:09:51.527 ] 00:09:51.527 }' 00:09:51.527 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.527 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 [2024-11-26 22:54:30.805328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.787 BaseBdev3 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.787 22:54:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 [ 00:09:51.787 { 00:09:51.787 "name": "BaseBdev3", 00:09:51.787 "aliases": [ 00:09:51.787 "4248850d-b332-4442-83a6-f875228ceccb" 00:09:51.787 ], 00:09:51.787 "product_name": "Malloc disk", 00:09:51.787 "block_size": 512, 00:09:51.787 "num_blocks": 65536, 00:09:51.787 "uuid": "4248850d-b332-4442-83a6-f875228ceccb", 00:09:51.787 "assigned_rate_limits": { 00:09:51.787 "rw_ios_per_sec": 0, 00:09:51.787 "rw_mbytes_per_sec": 0, 00:09:51.787 "r_mbytes_per_sec": 0, 00:09:51.787 "w_mbytes_per_sec": 0 00:09:51.787 }, 00:09:51.787 "claimed": true, 00:09:51.787 "claim_type": "exclusive_write", 00:09:51.787 "zoned": false, 00:09:51.787 "supported_io_types": { 00:09:51.787 "read": true, 00:09:51.787 "write": true, 00:09:51.787 "unmap": true, 00:09:51.787 "flush": true, 00:09:51.787 "reset": true, 00:09:51.787 "nvme_admin": false, 00:09:51.787 "nvme_io": false, 00:09:51.787 "nvme_io_md": false, 00:09:51.787 "write_zeroes": true, 00:09:51.787 "zcopy": true, 
00:09:51.787 "get_zone_info": false, 00:09:51.787 "zone_management": false, 00:09:51.787 "zone_append": false, 00:09:51.787 "compare": false, 00:09:51.787 "compare_and_write": false, 00:09:51.787 "abort": true, 00:09:51.787 "seek_hole": false, 00:09:51.787 "seek_data": false, 00:09:51.787 "copy": true, 00:09:51.787 "nvme_iov_md": false 00:09:51.787 }, 00:09:51.787 "memory_domains": [ 00:09:51.787 { 00:09:51.787 "dma_device_id": "system", 00:09:51.787 "dma_device_type": 1 00:09:51.787 }, 00:09:51.787 { 00:09:51.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.787 "dma_device_type": 2 00:09:51.787 } 00:09:51.787 ], 00:09:51.787 "driver_specific": {} 00:09:51.787 } 00:09:51.787 ] 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.787 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.788 
22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.788 "name": "Existed_Raid", 00:09:51.788 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:51.788 "strip_size_kb": 64, 00:09:51.788 "state": "configuring", 00:09:51.788 "raid_level": "raid0", 00:09:51.788 "superblock": true, 00:09:51.788 "num_base_bdevs": 4, 00:09:51.788 "num_base_bdevs_discovered": 3, 00:09:51.788 "num_base_bdevs_operational": 4, 00:09:51.788 "base_bdevs_list": [ 00:09:51.788 { 00:09:51.788 "name": "BaseBdev1", 00:09:51.788 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:51.788 "is_configured": true, 00:09:51.788 "data_offset": 2048, 00:09:51.788 "data_size": 63488 00:09:51.788 }, 00:09:51.788 { 00:09:51.788 "name": "BaseBdev2", 00:09:51.788 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:51.788 "is_configured": true, 00:09:51.788 "data_offset": 2048, 00:09:51.788 "data_size": 63488 00:09:51.788 }, 00:09:51.788 { 00:09:51.788 "name": "BaseBdev3", 00:09:51.788 "uuid": "4248850d-b332-4442-83a6-f875228ceccb", 00:09:51.788 
"is_configured": true, 00:09:51.788 "data_offset": 2048, 00:09:51.788 "data_size": 63488 00:09:51.788 }, 00:09:51.788 { 00:09:51.788 "name": "BaseBdev4", 00:09:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.788 "is_configured": false, 00:09:51.788 "data_offset": 0, 00:09:51.788 "data_size": 0 00:09:51.788 } 00:09:51.788 ] 00:09:51.788 }' 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.788 22:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.358 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:52.358 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 [2024-11-26 22:54:31.282160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.359 [2024-11-26 22:54:31.282427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:52.359 [2024-11-26 22:54:31.282455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.359 [2024-11-26 22:54:31.282802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:52.359 BaseBdev4 00:09:52.359 [2024-11-26 22:54:31.282971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:52.359 [2024-11-26 22:54:31.282983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:52.359 [2024-11-26 22:54:31.283144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.359 22:54:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 [ 00:09:52.359 { 00:09:52.359 "name": "BaseBdev4", 00:09:52.359 "aliases": [ 00:09:52.359 "11e270a2-7696-4e6c-9d30-30236f7af8dd" 00:09:52.359 ], 00:09:52.359 "product_name": "Malloc disk", 00:09:52.359 "block_size": 512, 00:09:52.359 "num_blocks": 65536, 00:09:52.359 "uuid": "11e270a2-7696-4e6c-9d30-30236f7af8dd", 00:09:52.359 "assigned_rate_limits": { 00:09:52.359 "rw_ios_per_sec": 0, 00:09:52.359 "rw_mbytes_per_sec": 0, 00:09:52.359 "r_mbytes_per_sec": 0, 00:09:52.359 "w_mbytes_per_sec": 0 
00:09:52.359 }, 00:09:52.359 "claimed": true, 00:09:52.359 "claim_type": "exclusive_write", 00:09:52.359 "zoned": false, 00:09:52.359 "supported_io_types": { 00:09:52.359 "read": true, 00:09:52.359 "write": true, 00:09:52.359 "unmap": true, 00:09:52.359 "flush": true, 00:09:52.359 "reset": true, 00:09:52.359 "nvme_admin": false, 00:09:52.359 "nvme_io": false, 00:09:52.359 "nvme_io_md": false, 00:09:52.359 "write_zeroes": true, 00:09:52.359 "zcopy": true, 00:09:52.359 "get_zone_info": false, 00:09:52.359 "zone_management": false, 00:09:52.359 "zone_append": false, 00:09:52.359 "compare": false, 00:09:52.359 "compare_and_write": false, 00:09:52.359 "abort": true, 00:09:52.359 "seek_hole": false, 00:09:52.359 "seek_data": false, 00:09:52.359 "copy": true, 00:09:52.359 "nvme_iov_md": false 00:09:52.359 }, 00:09:52.359 "memory_domains": [ 00:09:52.359 { 00:09:52.359 "dma_device_id": "system", 00:09:52.359 "dma_device_type": 1 00:09:52.359 }, 00:09:52.359 { 00:09:52.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.359 "dma_device_type": 2 00:09:52.359 } 00:09:52.359 ], 00:09:52.359 "driver_specific": {} 00:09:52.359 } 00:09:52.359 ] 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.359 22:54:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.359 "name": "Existed_Raid", 00:09:52.359 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:52.359 "strip_size_kb": 64, 00:09:52.359 "state": "online", 00:09:52.359 "raid_level": "raid0", 00:09:52.359 "superblock": true, 00:09:52.359 "num_base_bdevs": 4, 00:09:52.359 "num_base_bdevs_discovered": 4, 00:09:52.359 "num_base_bdevs_operational": 4, 00:09:52.359 "base_bdevs_list": [ 00:09:52.359 { 00:09:52.359 "name": "BaseBdev1", 00:09:52.359 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:52.359 "is_configured": 
true, 00:09:52.359 "data_offset": 2048, 00:09:52.359 "data_size": 63488 00:09:52.359 }, 00:09:52.359 { 00:09:52.359 "name": "BaseBdev2", 00:09:52.359 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:52.359 "is_configured": true, 00:09:52.359 "data_offset": 2048, 00:09:52.359 "data_size": 63488 00:09:52.359 }, 00:09:52.359 { 00:09:52.359 "name": "BaseBdev3", 00:09:52.359 "uuid": "4248850d-b332-4442-83a6-f875228ceccb", 00:09:52.359 "is_configured": true, 00:09:52.359 "data_offset": 2048, 00:09:52.359 "data_size": 63488 00:09:52.359 }, 00:09:52.359 { 00:09:52.359 "name": "BaseBdev4", 00:09:52.359 "uuid": "11e270a2-7696-4e6c-9d30-30236f7af8dd", 00:09:52.359 "is_configured": true, 00:09:52.359 "data_offset": 2048, 00:09:52.359 "data_size": 63488 00:09:52.359 } 00:09:52.359 ] 00:09:52.359 }' 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.359 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.618 22:54:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.618 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.618 [2024-11-26 22:54:31.726621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.879 "name": "Existed_Raid", 00:09:52.879 "aliases": [ 00:09:52.879 "797dcadf-c22b-4dd3-a3ac-8af9db519e85" 00:09:52.879 ], 00:09:52.879 "product_name": "Raid Volume", 00:09:52.879 "block_size": 512, 00:09:52.879 "num_blocks": 253952, 00:09:52.879 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:52.879 "assigned_rate_limits": { 00:09:52.879 "rw_ios_per_sec": 0, 00:09:52.879 "rw_mbytes_per_sec": 0, 00:09:52.879 "r_mbytes_per_sec": 0, 00:09:52.879 "w_mbytes_per_sec": 0 00:09:52.879 }, 00:09:52.879 "claimed": false, 00:09:52.879 "zoned": false, 00:09:52.879 "supported_io_types": { 00:09:52.879 "read": true, 00:09:52.879 "write": true, 00:09:52.879 "unmap": true, 00:09:52.879 "flush": true, 00:09:52.879 "reset": true, 00:09:52.879 "nvme_admin": false, 00:09:52.879 "nvme_io": false, 00:09:52.879 "nvme_io_md": false, 00:09:52.879 "write_zeroes": true, 00:09:52.879 "zcopy": false, 00:09:52.879 "get_zone_info": false, 00:09:52.879 "zone_management": false, 00:09:52.879 "zone_append": false, 00:09:52.879 "compare": false, 00:09:52.879 "compare_and_write": false, 00:09:52.879 "abort": false, 00:09:52.879 "seek_hole": false, 00:09:52.879 "seek_data": false, 00:09:52.879 "copy": false, 00:09:52.879 "nvme_iov_md": false 00:09:52.879 }, 00:09:52.879 "memory_domains": [ 00:09:52.879 { 00:09:52.879 "dma_device_id": "system", 00:09:52.879 "dma_device_type": 1 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:52.879 "dma_device_type": 2 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "system", 00:09:52.879 "dma_device_type": 1 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.879 "dma_device_type": 2 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "system", 00:09:52.879 "dma_device_type": 1 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.879 "dma_device_type": 2 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "system", 00:09:52.879 "dma_device_type": 1 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.879 "dma_device_type": 2 00:09:52.879 } 00:09:52.879 ], 00:09:52.879 "driver_specific": { 00:09:52.879 "raid": { 00:09:52.879 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:52.879 "strip_size_kb": 64, 00:09:52.879 "state": "online", 00:09:52.879 "raid_level": "raid0", 00:09:52.879 "superblock": true, 00:09:52.879 "num_base_bdevs": 4, 00:09:52.879 "num_base_bdevs_discovered": 4, 00:09:52.879 "num_base_bdevs_operational": 4, 00:09:52.879 "base_bdevs_list": [ 00:09:52.879 { 00:09:52.879 "name": "BaseBdev1", 00:09:52.879 "uuid": "f041e28b-4713-4afe-b4ef-fe73f845a309", 00:09:52.879 "is_configured": true, 00:09:52.879 "data_offset": 2048, 00:09:52.879 "data_size": 63488 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "name": "BaseBdev2", 00:09:52.879 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:52.879 "is_configured": true, 00:09:52.879 "data_offset": 2048, 00:09:52.879 "data_size": 63488 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "name": "BaseBdev3", 00:09:52.879 "uuid": "4248850d-b332-4442-83a6-f875228ceccb", 00:09:52.879 "is_configured": true, 00:09:52.879 "data_offset": 2048, 00:09:52.879 "data_size": 63488 00:09:52.879 }, 00:09:52.879 { 00:09:52.879 "name": "BaseBdev4", 00:09:52.879 "uuid": "11e270a2-7696-4e6c-9d30-30236f7af8dd", 00:09:52.879 "is_configured": true, 00:09:52.879 
"data_offset": 2048, 00:09:52.879 "data_size": 63488 00:09:52.879 } 00:09:52.879 ] 00:09:52.879 } 00:09:52.879 } 00:09:52.879 }' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:52.879 BaseBdev2 00:09:52.879 BaseBdev3 00:09:52.879 BaseBdev4' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.879 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.879 22:54:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.880 22:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.141 [2024-11-26 22:54:32.006494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.141 [2024-11-26 22:54:32.006528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.141 [2024-11-26 22:54:32.006610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:53.141 22:54:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.141 "name": "Existed_Raid", 00:09:53.141 "uuid": "797dcadf-c22b-4dd3-a3ac-8af9db519e85", 00:09:53.141 "strip_size_kb": 64, 00:09:53.141 
"state": "offline", 00:09:53.141 "raid_level": "raid0", 00:09:53.141 "superblock": true, 00:09:53.141 "num_base_bdevs": 4, 00:09:53.141 "num_base_bdevs_discovered": 3, 00:09:53.141 "num_base_bdevs_operational": 3, 00:09:53.141 "base_bdevs_list": [ 00:09:53.141 { 00:09:53.141 "name": null, 00:09:53.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.141 "is_configured": false, 00:09:53.141 "data_offset": 0, 00:09:53.141 "data_size": 63488 00:09:53.141 }, 00:09:53.141 { 00:09:53.141 "name": "BaseBdev2", 00:09:53.141 "uuid": "354ddd7e-22e9-4f5e-af18-935e682dd581", 00:09:53.141 "is_configured": true, 00:09:53.141 "data_offset": 2048, 00:09:53.141 "data_size": 63488 00:09:53.141 }, 00:09:53.141 { 00:09:53.141 "name": "BaseBdev3", 00:09:53.141 "uuid": "4248850d-b332-4442-83a6-f875228ceccb", 00:09:53.141 "is_configured": true, 00:09:53.141 "data_offset": 2048, 00:09:53.141 "data_size": 63488 00:09:53.141 }, 00:09:53.141 { 00:09:53.141 "name": "BaseBdev4", 00:09:53.141 "uuid": "11e270a2-7696-4e6c-9d30-30236f7af8dd", 00:09:53.141 "is_configured": true, 00:09:53.141 "data_offset": 2048, 00:09:53.141 "data_size": 63488 00:09:53.141 } 00:09:53.141 ] 00:09:53.141 }' 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.141 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.400 22:54:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.400 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.400 [2024-11-26 22:54:32.519267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.659 [2024-11-26 22:54:32.584080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.659 [2024-11-26 22:54:32.648654] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:53.659 [2024-11-26 22:54:32.648722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.659 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:53.660 BaseBdev2 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.660 [ 00:09:53.660 { 00:09:53.660 "name": "BaseBdev2", 00:09:53.660 "aliases": [ 00:09:53.660 "31627237-e556-4d73-97f3-6afde847c79d" 00:09:53.660 ], 00:09:53.660 "product_name": "Malloc disk", 00:09:53.660 "block_size": 512, 00:09:53.660 "num_blocks": 65536, 00:09:53.660 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:53.660 
"assigned_rate_limits": { 00:09:53.660 "rw_ios_per_sec": 0, 00:09:53.660 "rw_mbytes_per_sec": 0, 00:09:53.660 "r_mbytes_per_sec": 0, 00:09:53.660 "w_mbytes_per_sec": 0 00:09:53.660 }, 00:09:53.660 "claimed": false, 00:09:53.660 "zoned": false, 00:09:53.660 "supported_io_types": { 00:09:53.660 "read": true, 00:09:53.660 "write": true, 00:09:53.660 "unmap": true, 00:09:53.660 "flush": true, 00:09:53.660 "reset": true, 00:09:53.660 "nvme_admin": false, 00:09:53.660 "nvme_io": false, 00:09:53.660 "nvme_io_md": false, 00:09:53.660 "write_zeroes": true, 00:09:53.660 "zcopy": true, 00:09:53.660 "get_zone_info": false, 00:09:53.660 "zone_management": false, 00:09:53.660 "zone_append": false, 00:09:53.660 "compare": false, 00:09:53.660 "compare_and_write": false, 00:09:53.660 "abort": true, 00:09:53.660 "seek_hole": false, 00:09:53.660 "seek_data": false, 00:09:53.660 "copy": true, 00:09:53.660 "nvme_iov_md": false 00:09:53.660 }, 00:09:53.660 "memory_domains": [ 00:09:53.660 { 00:09:53.660 "dma_device_id": "system", 00:09:53.660 "dma_device_type": 1 00:09:53.660 }, 00:09:53.660 { 00:09:53.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.660 "dma_device_type": 2 00:09:53.660 } 00:09:53.660 ], 00:09:53.660 "driver_specific": {} 00:09:53.660 } 00:09:53.660 ] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:53.660 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.660 22:54:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.920 BaseBdev3 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.920 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.920 [ 00:09:53.920 { 00:09:53.920 "name": "BaseBdev3", 00:09:53.920 "aliases": [ 00:09:53.920 "e83fdd9a-a47c-4b51-90f7-2b772234f584" 00:09:53.920 ], 00:09:53.920 "product_name": "Malloc disk", 00:09:53.920 "block_size": 512, 00:09:53.920 "num_blocks": 65536, 00:09:53.920 
"uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:53.920 "assigned_rate_limits": { 00:09:53.920 "rw_ios_per_sec": 0, 00:09:53.920 "rw_mbytes_per_sec": 0, 00:09:53.920 "r_mbytes_per_sec": 0, 00:09:53.920 "w_mbytes_per_sec": 0 00:09:53.920 }, 00:09:53.920 "claimed": false, 00:09:53.920 "zoned": false, 00:09:53.920 "supported_io_types": { 00:09:53.920 "read": true, 00:09:53.920 "write": true, 00:09:53.920 "unmap": true, 00:09:53.920 "flush": true, 00:09:53.921 "reset": true, 00:09:53.921 "nvme_admin": false, 00:09:53.921 "nvme_io": false, 00:09:53.921 "nvme_io_md": false, 00:09:53.921 "write_zeroes": true, 00:09:53.921 "zcopy": true, 00:09:53.921 "get_zone_info": false, 00:09:53.921 "zone_management": false, 00:09:53.921 "zone_append": false, 00:09:53.921 "compare": false, 00:09:53.921 "compare_and_write": false, 00:09:53.921 "abort": true, 00:09:53.921 "seek_hole": false, 00:09:53.921 "seek_data": false, 00:09:53.921 "copy": true, 00:09:53.921 "nvme_iov_md": false 00:09:53.921 }, 00:09:53.921 "memory_domains": [ 00:09:53.921 { 00:09:53.921 "dma_device_id": "system", 00:09:53.921 "dma_device_type": 1 00:09:53.921 }, 00:09:53.921 { 00:09:53.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.921 "dma_device_type": 2 00:09:53.921 } 00:09:53.921 ], 00:09:53.921 "driver_specific": {} 00:09:53.921 } 00:09:53.921 ] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 BaseBdev4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 [ 00:09:53.921 { 00:09:53.921 "name": "BaseBdev4", 00:09:53.921 "aliases": [ 00:09:53.921 "94b17eeb-698e-4a31-b019-c98af984320b" 00:09:53.921 ], 00:09:53.921 "product_name": "Malloc disk", 00:09:53.921 "block_size": 512, 
00:09:53.921 "num_blocks": 65536, 00:09:53.921 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:53.921 "assigned_rate_limits": { 00:09:53.921 "rw_ios_per_sec": 0, 00:09:53.921 "rw_mbytes_per_sec": 0, 00:09:53.921 "r_mbytes_per_sec": 0, 00:09:53.921 "w_mbytes_per_sec": 0 00:09:53.921 }, 00:09:53.921 "claimed": false, 00:09:53.921 "zoned": false, 00:09:53.921 "supported_io_types": { 00:09:53.921 "read": true, 00:09:53.921 "write": true, 00:09:53.921 "unmap": true, 00:09:53.921 "flush": true, 00:09:53.921 "reset": true, 00:09:53.921 "nvme_admin": false, 00:09:53.921 "nvme_io": false, 00:09:53.921 "nvme_io_md": false, 00:09:53.921 "write_zeroes": true, 00:09:53.921 "zcopy": true, 00:09:53.921 "get_zone_info": false, 00:09:53.921 "zone_management": false, 00:09:53.921 "zone_append": false, 00:09:53.921 "compare": false, 00:09:53.921 "compare_and_write": false, 00:09:53.921 "abort": true, 00:09:53.921 "seek_hole": false, 00:09:53.921 "seek_data": false, 00:09:53.921 "copy": true, 00:09:53.921 "nvme_iov_md": false 00:09:53.921 }, 00:09:53.921 "memory_domains": [ 00:09:53.921 { 00:09:53.921 "dma_device_id": "system", 00:09:53.921 "dma_device_type": 1 00:09:53.921 }, 00:09:53.921 { 00:09:53.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.921 "dma_device_type": 2 00:09:53.921 } 00:09:53.921 ], 00:09:53.921 "driver_specific": {} 00:09:53.921 } 00:09:53.921 ] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4'\''' -n Existed_Raid 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 [2024-11-26 22:54:32.884768] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.921 [2024-11-26 22:54:32.884826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.921 [2024-11-26 22:54:32.884850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.921 [2024-11-26 22:54:32.887002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.921 [2024-11-26 22:54:32.887069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.921 22:54:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.921 "name": "Existed_Raid", 00:09:53.921 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:53.921 "strip_size_kb": 64, 00:09:53.921 "state": "configuring", 00:09:53.921 "raid_level": "raid0", 00:09:53.921 "superblock": true, 00:09:53.921 "num_base_bdevs": 4, 00:09:53.921 "num_base_bdevs_discovered": 3, 00:09:53.921 "num_base_bdevs_operational": 4, 00:09:53.921 "base_bdevs_list": [ 00:09:53.921 { 00:09:53.921 "name": "BaseBdev1", 00:09:53.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.921 "is_configured": false, 00:09:53.921 "data_offset": 0, 00:09:53.921 "data_size": 0 00:09:53.921 }, 00:09:53.921 { 00:09:53.921 "name": "BaseBdev2", 00:09:53.921 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:53.921 "is_configured": true, 00:09:53.921 "data_offset": 2048, 00:09:53.921 "data_size": 63488 00:09:53.921 }, 00:09:53.921 { 00:09:53.921 "name": "BaseBdev3", 00:09:53.921 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:53.921 "is_configured": true, 00:09:53.921 "data_offset": 2048, 00:09:53.921 "data_size": 63488 00:09:53.921 }, 00:09:53.921 { 00:09:53.921 
"name": "BaseBdev4", 00:09:53.921 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:53.921 "is_configured": true, 00:09:53.921 "data_offset": 2048, 00:09:53.921 "data_size": 63488 00:09:53.921 } 00:09:53.921 ] 00:09:53.921 }' 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.921 22:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.181 [2024-11-26 22:54:33.300823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.181 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.441 22:54:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.441 "name": "Existed_Raid", 00:09:54.441 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:54.441 "strip_size_kb": 64, 00:09:54.441 "state": "configuring", 00:09:54.441 "raid_level": "raid0", 00:09:54.441 "superblock": true, 00:09:54.441 "num_base_bdevs": 4, 00:09:54.441 "num_base_bdevs_discovered": 2, 00:09:54.441 "num_base_bdevs_operational": 4, 00:09:54.441 "base_bdevs_list": [ 00:09:54.441 { 00:09:54.441 "name": "BaseBdev1", 00:09:54.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.441 "is_configured": false, 00:09:54.441 "data_offset": 0, 00:09:54.441 "data_size": 0 00:09:54.441 }, 00:09:54.441 { 00:09:54.441 "name": null, 00:09:54.441 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:54.441 "is_configured": false, 00:09:54.441 "data_offset": 0, 00:09:54.441 "data_size": 63488 00:09:54.441 }, 00:09:54.441 { 00:09:54.441 "name": "BaseBdev3", 00:09:54.441 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:54.441 "is_configured": true, 00:09:54.441 "data_offset": 2048, 00:09:54.441 "data_size": 63488 00:09:54.441 }, 00:09:54.441 { 00:09:54.441 "name": 
"BaseBdev4", 00:09:54.441 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:54.441 "is_configured": true, 00:09:54.441 "data_offset": 2048, 00:09:54.441 "data_size": 63488 00:09:54.441 } 00:09:54.441 ] 00:09:54.441 }' 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.441 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 [2024-11-26 22:54:33.797732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.702 BaseBdev1 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.702 
22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.702 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 [ 00:09:54.702 { 00:09:54.702 "name": "BaseBdev1", 00:09:54.702 "aliases": [ 00:09:54.702 "85ca4c9c-aa94-4dd3-9862-f329cc5c7617" 00:09:54.702 ], 00:09:54.702 "product_name": "Malloc disk", 00:09:54.702 "block_size": 512, 00:09:54.702 "num_blocks": 65536, 00:09:54.702 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:54.702 "assigned_rate_limits": { 00:09:54.702 "rw_ios_per_sec": 0, 00:09:54.702 "rw_mbytes_per_sec": 0, 00:09:54.702 "r_mbytes_per_sec": 0, 00:09:54.978 "w_mbytes_per_sec": 0 00:09:54.978 }, 00:09:54.978 "claimed": true, 00:09:54.978 "claim_type": "exclusive_write", 00:09:54.978 "zoned": false, 00:09:54.978 "supported_io_types": { 00:09:54.978 "read": true, 00:09:54.978 "write": true, 00:09:54.978 "unmap": 
true, 00:09:54.978 "flush": true, 00:09:54.978 "reset": true, 00:09:54.978 "nvme_admin": false, 00:09:54.978 "nvme_io": false, 00:09:54.978 "nvme_io_md": false, 00:09:54.978 "write_zeroes": true, 00:09:54.978 "zcopy": true, 00:09:54.978 "get_zone_info": false, 00:09:54.978 "zone_management": false, 00:09:54.978 "zone_append": false, 00:09:54.978 "compare": false, 00:09:54.978 "compare_and_write": false, 00:09:54.978 "abort": true, 00:09:54.978 "seek_hole": false, 00:09:54.978 "seek_data": false, 00:09:54.978 "copy": true, 00:09:54.978 "nvme_iov_md": false 00:09:54.978 }, 00:09:54.978 "memory_domains": [ 00:09:54.978 { 00:09:54.978 "dma_device_id": "system", 00:09:54.978 "dma_device_type": 1 00:09:54.978 }, 00:09:54.978 { 00:09:54.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.978 "dma_device_type": 2 00:09:54.978 } 00:09:54.978 ], 00:09:54.978 "driver_specific": {} 00:09:54.978 } 00:09:54.978 ] 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.978 "name": "Existed_Raid", 00:09:54.978 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:54.978 "strip_size_kb": 64, 00:09:54.978 "state": "configuring", 00:09:54.978 "raid_level": "raid0", 00:09:54.978 "superblock": true, 00:09:54.978 "num_base_bdevs": 4, 00:09:54.978 "num_base_bdevs_discovered": 3, 00:09:54.978 "num_base_bdevs_operational": 4, 00:09:54.978 "base_bdevs_list": [ 00:09:54.978 { 00:09:54.978 "name": "BaseBdev1", 00:09:54.978 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:54.978 "is_configured": true, 00:09:54.978 "data_offset": 2048, 00:09:54.978 "data_size": 63488 00:09:54.978 }, 00:09:54.978 { 00:09:54.978 "name": null, 00:09:54.978 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:54.978 "is_configured": false, 00:09:54.978 "data_offset": 0, 00:09:54.978 "data_size": 63488 00:09:54.978 }, 00:09:54.978 { 00:09:54.978 "name": "BaseBdev3", 00:09:54.978 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:54.978 
"is_configured": true, 00:09:54.978 "data_offset": 2048, 00:09:54.978 "data_size": 63488 00:09:54.978 }, 00:09:54.978 { 00:09:54.978 "name": "BaseBdev4", 00:09:54.978 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:54.978 "is_configured": true, 00:09:54.978 "data_offset": 2048, 00:09:54.978 "data_size": 63488 00:09:54.978 } 00:09:54.978 ] 00:09:54.978 }' 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.978 22:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.248 [2024-11-26 22:54:34.313913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.248 
22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.248 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.507 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.507 "name": "Existed_Raid", 00:09:55.507 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:55.507 "strip_size_kb": 64, 00:09:55.507 "state": "configuring", 00:09:55.507 "raid_level": "raid0", 00:09:55.507 "superblock": true, 00:09:55.507 "num_base_bdevs": 4, 
00:09:55.507 "num_base_bdevs_discovered": 2, 00:09:55.507 "num_base_bdevs_operational": 4, 00:09:55.507 "base_bdevs_list": [ 00:09:55.507 { 00:09:55.507 "name": "BaseBdev1", 00:09:55.507 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:55.507 "is_configured": true, 00:09:55.507 "data_offset": 2048, 00:09:55.507 "data_size": 63488 00:09:55.507 }, 00:09:55.507 { 00:09:55.507 "name": null, 00:09:55.507 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:55.507 "is_configured": false, 00:09:55.507 "data_offset": 0, 00:09:55.507 "data_size": 63488 00:09:55.507 }, 00:09:55.507 { 00:09:55.507 "name": null, 00:09:55.507 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:55.507 "is_configured": false, 00:09:55.507 "data_offset": 0, 00:09:55.507 "data_size": 63488 00:09:55.507 }, 00:09:55.507 { 00:09:55.507 "name": "BaseBdev4", 00:09:55.507 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:55.507 "is_configured": true, 00:09:55.507 "data_offset": 2048, 00:09:55.507 "data_size": 63488 00:09:55.507 } 00:09:55.507 ] 00:09:55.507 }' 00:09:55.507 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.507 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.767 22:54:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.767 [2024-11-26 22:54:34.774074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.767 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.767 "name": "Existed_Raid", 00:09:55.767 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:55.767 "strip_size_kb": 64, 00:09:55.767 "state": "configuring", 00:09:55.767 "raid_level": "raid0", 00:09:55.767 "superblock": true, 00:09:55.767 "num_base_bdevs": 4, 00:09:55.767 "num_base_bdevs_discovered": 3, 00:09:55.767 "num_base_bdevs_operational": 4, 00:09:55.767 "base_bdevs_list": [ 00:09:55.767 { 00:09:55.767 "name": "BaseBdev1", 00:09:55.767 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:55.767 "is_configured": true, 00:09:55.767 "data_offset": 2048, 00:09:55.767 "data_size": 63488 00:09:55.768 }, 00:09:55.768 { 00:09:55.768 "name": null, 00:09:55.768 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:55.768 "is_configured": false, 00:09:55.768 "data_offset": 0, 00:09:55.768 "data_size": 63488 00:09:55.768 }, 00:09:55.768 { 00:09:55.768 "name": "BaseBdev3", 00:09:55.768 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:55.768 "is_configured": true, 00:09:55.768 "data_offset": 2048, 00:09:55.768 "data_size": 63488 00:09:55.768 }, 00:09:55.768 { 00:09:55.768 "name": "BaseBdev4", 00:09:55.768 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:55.768 "is_configured": true, 00:09:55.768 "data_offset": 2048, 00:09:55.768 "data_size": 63488 00:09:55.768 } 00:09:55.768 ] 00:09:55.768 }' 00:09:55.768 22:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.768 22:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.334 [2024-11-26 22:54:35.254201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.334 "name": "Existed_Raid", 00:09:56.334 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:56.334 "strip_size_kb": 64, 00:09:56.334 "state": "configuring", 00:09:56.334 "raid_level": "raid0", 00:09:56.334 "superblock": true, 00:09:56.334 "num_base_bdevs": 4, 00:09:56.334 "num_base_bdevs_discovered": 2, 00:09:56.334 "num_base_bdevs_operational": 4, 00:09:56.334 "base_bdevs_list": [ 00:09:56.334 { 00:09:56.334 "name": null, 00:09:56.334 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:56.334 "is_configured": false, 00:09:56.334 "data_offset": 0, 00:09:56.334 "data_size": 63488 00:09:56.334 }, 00:09:56.334 { 00:09:56.334 "name": null, 00:09:56.334 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:56.334 "is_configured": false, 00:09:56.334 "data_offset": 0, 00:09:56.334 "data_size": 63488 00:09:56.334 
}, 00:09:56.334 { 00:09:56.334 "name": "BaseBdev3", 00:09:56.334 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:56.334 "is_configured": true, 00:09:56.334 "data_offset": 2048, 00:09:56.334 "data_size": 63488 00:09:56.334 }, 00:09:56.334 { 00:09:56.334 "name": "BaseBdev4", 00:09:56.334 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:56.334 "is_configured": true, 00:09:56.334 "data_offset": 2048, 00:09:56.334 "data_size": 63488 00:09:56.334 } 00:09:56.334 ] 00:09:56.334 }' 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.334 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.592 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.592 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.592 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.592 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 [2024-11-26 22:54:35.742184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.851 "name": "Existed_Raid", 00:09:56.851 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:56.851 
"strip_size_kb": 64, 00:09:56.851 "state": "configuring", 00:09:56.851 "raid_level": "raid0", 00:09:56.851 "superblock": true, 00:09:56.851 "num_base_bdevs": 4, 00:09:56.851 "num_base_bdevs_discovered": 3, 00:09:56.851 "num_base_bdevs_operational": 4, 00:09:56.851 "base_bdevs_list": [ 00:09:56.851 { 00:09:56.851 "name": null, 00:09:56.851 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:56.851 "is_configured": false, 00:09:56.851 "data_offset": 0, 00:09:56.851 "data_size": 63488 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "name": "BaseBdev2", 00:09:56.851 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "name": "BaseBdev3", 00:09:56.851 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "name": "BaseBdev4", 00:09:56.851 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 } 00:09:56.851 ] 00:09:56.851 }' 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.851 22:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.110 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.110 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.110 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.110 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85ca4c9c-aa94-4dd3-9862-f329cc5c7617 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.369 [2024-11-26 22:54:36.338326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.369 [2024-11-26 22:54:36.338622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.369 [2024-11-26 22:54:36.338685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.369 NewBaseBdev 00:09:57.369 [2024-11-26 22:54:36.338995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:09:57.369 [2024-11-26 22:54:36.339134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.369 [2024-11-26 22:54:36.339145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:57.369 [2024-11-26 22:54:36.339292] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.369 [ 00:09:57.369 { 00:09:57.369 "name": "NewBaseBdev", 00:09:57.369 "aliases": [ 00:09:57.369 "85ca4c9c-aa94-4dd3-9862-f329cc5c7617" 00:09:57.369 ], 00:09:57.369 "product_name": "Malloc disk", 00:09:57.369 "block_size": 512, 00:09:57.369 "num_blocks": 65536, 00:09:57.369 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:57.369 
"assigned_rate_limits": { 00:09:57.369 "rw_ios_per_sec": 0, 00:09:57.369 "rw_mbytes_per_sec": 0, 00:09:57.369 "r_mbytes_per_sec": 0, 00:09:57.369 "w_mbytes_per_sec": 0 00:09:57.369 }, 00:09:57.369 "claimed": true, 00:09:57.369 "claim_type": "exclusive_write", 00:09:57.369 "zoned": false, 00:09:57.369 "supported_io_types": { 00:09:57.369 "read": true, 00:09:57.369 "write": true, 00:09:57.369 "unmap": true, 00:09:57.369 "flush": true, 00:09:57.369 "reset": true, 00:09:57.369 "nvme_admin": false, 00:09:57.369 "nvme_io": false, 00:09:57.369 "nvme_io_md": false, 00:09:57.369 "write_zeroes": true, 00:09:57.369 "zcopy": true, 00:09:57.369 "get_zone_info": false, 00:09:57.369 "zone_management": false, 00:09:57.369 "zone_append": false, 00:09:57.369 "compare": false, 00:09:57.369 "compare_and_write": false, 00:09:57.369 "abort": true, 00:09:57.369 "seek_hole": false, 00:09:57.369 "seek_data": false, 00:09:57.369 "copy": true, 00:09:57.369 "nvme_iov_md": false 00:09:57.369 }, 00:09:57.369 "memory_domains": [ 00:09:57.369 { 00:09:57.369 "dma_device_id": "system", 00:09:57.369 "dma_device_type": 1 00:09:57.369 }, 00:09:57.369 { 00:09:57.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.369 "dma_device_type": 2 00:09:57.369 } 00:09:57.369 ], 00:09:57.369 "driver_specific": {} 00:09:57.369 } 00:09:57.369 ] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.369 "name": "Existed_Raid", 00:09:57.369 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:57.369 "strip_size_kb": 64, 00:09:57.369 "state": "online", 00:09:57.369 "raid_level": "raid0", 00:09:57.369 "superblock": true, 00:09:57.369 "num_base_bdevs": 4, 00:09:57.369 "num_base_bdevs_discovered": 4, 00:09:57.369 "num_base_bdevs_operational": 4, 00:09:57.369 "base_bdevs_list": [ 00:09:57.369 { 00:09:57.369 "name": "NewBaseBdev", 00:09:57.369 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:57.369 "is_configured": true, 00:09:57.369 "data_offset": 2048, 
00:09:57.369 "data_size": 63488 00:09:57.369 }, 00:09:57.369 { 00:09:57.369 "name": "BaseBdev2", 00:09:57.369 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:57.369 "is_configured": true, 00:09:57.369 "data_offset": 2048, 00:09:57.369 "data_size": 63488 00:09:57.369 }, 00:09:57.369 { 00:09:57.369 "name": "BaseBdev3", 00:09:57.369 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:57.369 "is_configured": true, 00:09:57.369 "data_offset": 2048, 00:09:57.369 "data_size": 63488 00:09:57.369 }, 00:09:57.369 { 00:09:57.369 "name": "BaseBdev4", 00:09:57.369 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:57.369 "is_configured": true, 00:09:57.369 "data_offset": 2048, 00:09:57.369 "data_size": 63488 00:09:57.369 } 00:09:57.369 ] 00:09:57.369 }' 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.369 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.937 [2024-11-26 22:54:36.834762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.937 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.937 "name": "Existed_Raid", 00:09:57.937 "aliases": [ 00:09:57.937 "603b879e-b5a0-4f63-be25-5a6fc68a3bae" 00:09:57.937 ], 00:09:57.937 "product_name": "Raid Volume", 00:09:57.937 "block_size": 512, 00:09:57.937 "num_blocks": 253952, 00:09:57.937 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:57.937 "assigned_rate_limits": { 00:09:57.937 "rw_ios_per_sec": 0, 00:09:57.937 "rw_mbytes_per_sec": 0, 00:09:57.937 "r_mbytes_per_sec": 0, 00:09:57.937 "w_mbytes_per_sec": 0 00:09:57.937 }, 00:09:57.937 "claimed": false, 00:09:57.938 "zoned": false, 00:09:57.938 "supported_io_types": { 00:09:57.938 "read": true, 00:09:57.938 "write": true, 00:09:57.938 "unmap": true, 00:09:57.938 "flush": true, 00:09:57.938 "reset": true, 00:09:57.938 "nvme_admin": false, 00:09:57.938 "nvme_io": false, 00:09:57.938 "nvme_io_md": false, 00:09:57.938 "write_zeroes": true, 00:09:57.938 "zcopy": false, 00:09:57.938 "get_zone_info": false, 00:09:57.938 "zone_management": false, 00:09:57.938 "zone_append": false, 00:09:57.938 "compare": false, 00:09:57.938 "compare_and_write": false, 00:09:57.938 "abort": false, 00:09:57.938 "seek_hole": false, 00:09:57.938 "seek_data": false, 00:09:57.938 "copy": false, 00:09:57.938 "nvme_iov_md": false 00:09:57.938 }, 00:09:57.938 "memory_domains": [ 00:09:57.938 { 00:09:57.938 "dma_device_id": "system", 00:09:57.938 "dma_device_type": 1 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.938 "dma_device_type": 2 00:09:57.938 }, 
00:09:57.938 { 00:09:57.938 "dma_device_id": "system", 00:09:57.938 "dma_device_type": 1 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.938 "dma_device_type": 2 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "system", 00:09:57.938 "dma_device_type": 1 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.938 "dma_device_type": 2 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "system", 00:09:57.938 "dma_device_type": 1 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.938 "dma_device_type": 2 00:09:57.938 } 00:09:57.938 ], 00:09:57.938 "driver_specific": { 00:09:57.938 "raid": { 00:09:57.938 "uuid": "603b879e-b5a0-4f63-be25-5a6fc68a3bae", 00:09:57.938 "strip_size_kb": 64, 00:09:57.938 "state": "online", 00:09:57.938 "raid_level": "raid0", 00:09:57.938 "superblock": true, 00:09:57.938 "num_base_bdevs": 4, 00:09:57.938 "num_base_bdevs_discovered": 4, 00:09:57.938 "num_base_bdevs_operational": 4, 00:09:57.938 "base_bdevs_list": [ 00:09:57.938 { 00:09:57.938 "name": "NewBaseBdev", 00:09:57.938 "uuid": "85ca4c9c-aa94-4dd3-9862-f329cc5c7617", 00:09:57.938 "is_configured": true, 00:09:57.938 "data_offset": 2048, 00:09:57.938 "data_size": 63488 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "name": "BaseBdev2", 00:09:57.938 "uuid": "31627237-e556-4d73-97f3-6afde847c79d", 00:09:57.938 "is_configured": true, 00:09:57.938 "data_offset": 2048, 00:09:57.938 "data_size": 63488 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "name": "BaseBdev3", 00:09:57.938 "uuid": "e83fdd9a-a47c-4b51-90f7-2b772234f584", 00:09:57.938 "is_configured": true, 00:09:57.938 "data_offset": 2048, 00:09:57.938 "data_size": 63488 00:09:57.938 }, 00:09:57.938 { 00:09:57.938 "name": "BaseBdev4", 00:09:57.938 "uuid": "94b17eeb-698e-4a31-b019-c98af984320b", 00:09:57.938 "is_configured": true, 00:09:57.938 "data_offset": 2048, 00:09:57.938 "data_size": 63488 
00:09:57.938 } 00:09:57.938 ] 00:09:57.938 } 00:09:57.938 } 00:09:57.938 }' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:57.938 BaseBdev2 00:09:57.938 BaseBdev3 00:09:57.938 BaseBdev4' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.938 22:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.938 22:54:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.938 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.196 
22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 [2024-11-26 22:54:37.186562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.196 [2024-11-26 22:54:37.186637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.196 [2024-11-26 22:54:37.186739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.196 [2024-11-26 22:54:37.186861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.196 [2024-11-26 22:54:37.186925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.196 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82607 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82607 ']' 00:09:58.197 22:54:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82607 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82607 00:09:58.197 killing process with pid 82607 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82607' 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82607 00:09:58.197 [2024-11-26 22:54:37.218499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.197 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82607 00:09:58.197 [2024-11-26 22:54:37.295350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.764 22:54:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:58.764 00:09:58.764 real 0m9.618s 00:09:58.764 user 0m16.120s 00:09:58.764 sys 0m2.143s 00:09:58.764 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.764 ************************************ 00:09:58.764 END TEST raid_state_function_test_sb 00:09:58.764 ************************************ 00:09:58.764 22:54:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.764 22:54:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:58.764 22:54:37 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.764 22:54:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.764 22:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.764 ************************************ 00:09:58.764 START TEST raid_superblock_test 00:09:58.764 ************************************ 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:58.764 22:54:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83255 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83255 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83255 ']' 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.764 22:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 [2024-11-26 22:54:37.804311] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:58.765 [2024-11-26 22:54:37.804520] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83255 ] 00:09:59.024 [2024-11-26 22:54:37.941045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:59.024 [2024-11-26 22:54:37.981475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.024 [2024-11-26 22:54:38.019133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.024 [2024-11-26 22:54:38.095346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.024 [2024-11-26 22:54:38.095404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.594 malloc1 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.594 [2024-11-26 22:54:38.649238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.594 [2024-11-26 22:54:38.649379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.594 [2024-11-26 22:54:38.649442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:59.594 [2024-11-26 22:54:38.649497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.594 [2024-11-26 22:54:38.651964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.594 [2024-11-26 22:54:38.652061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.594 pt1 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.594 malloc2 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.594 [2024-11-26 22:54:38.687771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.594 [2024-11-26 22:54:38.687890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.594 [2024-11-26 22:54:38.687931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:59.594 [2024-11-26 22:54:38.687977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.594 [2024-11-26 22:54:38.690406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.594 [2024-11-26 22:54:38.690485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.594 pt2 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.594 malloc3 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.594 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 [2024-11-26 22:54:38.722244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.854 [2024-11-26 22:54:38.722376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.854 [2024-11-26 22:54:38.722423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:59.854 [2024-11-26 22:54:38.722467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:59.854 [2024-11-26 22:54:38.724811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.854 [2024-11-26 22:54:38.724903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.854 pt3 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 malloc4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 [2024-11-26 22:54:38.778915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:59.854 [2024-11-26 22:54:38.779080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.854 [2024-11-26 22:54:38.779160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:59.854 [2024-11-26 22:54:38.779227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.854 [2024-11-26 22:54:38.782909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.854 [2024-11-26 22:54:38.783026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:59.854 pt4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 [2024-11-26 22:54:38.791360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.854 [2024-11-26 22:54:38.793735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.854 [2024-11-26 22:54:38.793856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.854 [2024-11-26 22:54:38.793912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
00:09:59.854 [2024-11-26 22:54:38.794099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:59.854 [2024-11-26 22:54:38.794114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:59.854 [2024-11-26 22:54:38.794456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:59.854 [2024-11-26 22:54:38.794643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:59.854 [2024-11-26 22:54:38.794661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:59.854 [2024-11-26 22:54:38.794801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.854 
22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.854 22:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.854 "name": "raid_bdev1", 00:09:59.854 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:09:59.854 "strip_size_kb": 64, 00:09:59.854 "state": "online", 00:09:59.854 "raid_level": "raid0", 00:09:59.854 "superblock": true, 00:09:59.854 "num_base_bdevs": 4, 00:09:59.854 "num_base_bdevs_discovered": 4, 00:09:59.854 "num_base_bdevs_operational": 4, 00:09:59.854 "base_bdevs_list": [ 00:09:59.854 { 00:09:59.855 "name": "pt1", 00:09:59.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.855 "is_configured": true, 00:09:59.855 "data_offset": 2048, 00:09:59.855 "data_size": 63488 00:09:59.855 }, 00:09:59.855 { 00:09:59.855 "name": "pt2", 00:09:59.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.855 "is_configured": true, 00:09:59.855 "data_offset": 2048, 00:09:59.855 "data_size": 63488 00:09:59.855 }, 00:09:59.855 { 00:09:59.855 "name": "pt3", 00:09:59.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.855 "is_configured": true, 00:09:59.855 "data_offset": 2048, 00:09:59.855 "data_size": 63488 00:09:59.855 }, 00:09:59.855 { 00:09:59.855 "name": "pt4", 00:09:59.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:59.855 "is_configured": true, 00:09:59.855 "data_offset": 2048, 00:09:59.855 "data_size": 63488 00:09:59.855 } 00:09:59.855 ] 00:09:59.855 }' 00:09:59.855 22:54:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.855 22:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.113 [2024-11-26 22:54:39.203695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.113 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.371 "name": "raid_bdev1", 00:10:00.371 "aliases": [ 00:10:00.371 "afc594a0-dc1c-4213-a9ac-a61445dac268" 00:10:00.371 ], 00:10:00.371 "product_name": "Raid Volume", 00:10:00.371 "block_size": 512, 00:10:00.371 "num_blocks": 253952, 00:10:00.371 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:00.371 "assigned_rate_limits": { 00:10:00.371 "rw_ios_per_sec": 0, 00:10:00.371 "rw_mbytes_per_sec": 0, 00:10:00.371 "r_mbytes_per_sec": 0, 00:10:00.371 
"w_mbytes_per_sec": 0 00:10:00.371 }, 00:10:00.371 "claimed": false, 00:10:00.371 "zoned": false, 00:10:00.371 "supported_io_types": { 00:10:00.371 "read": true, 00:10:00.371 "write": true, 00:10:00.371 "unmap": true, 00:10:00.371 "flush": true, 00:10:00.371 "reset": true, 00:10:00.371 "nvme_admin": false, 00:10:00.371 "nvme_io": false, 00:10:00.371 "nvme_io_md": false, 00:10:00.371 "write_zeroes": true, 00:10:00.371 "zcopy": false, 00:10:00.371 "get_zone_info": false, 00:10:00.371 "zone_management": false, 00:10:00.371 "zone_append": false, 00:10:00.371 "compare": false, 00:10:00.371 "compare_and_write": false, 00:10:00.371 "abort": false, 00:10:00.371 "seek_hole": false, 00:10:00.371 "seek_data": false, 00:10:00.371 "copy": false, 00:10:00.371 "nvme_iov_md": false 00:10:00.371 }, 00:10:00.371 "memory_domains": [ 00:10:00.371 { 00:10:00.371 "dma_device_id": "system", 00:10:00.371 "dma_device_type": 1 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.371 "dma_device_type": 2 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "system", 00:10:00.371 "dma_device_type": 1 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.371 "dma_device_type": 2 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "system", 00:10:00.371 "dma_device_type": 1 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.371 "dma_device_type": 2 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "system", 00:10:00.371 "dma_device_type": 1 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.371 "dma_device_type": 2 00:10:00.371 } 00:10:00.371 ], 00:10:00.371 "driver_specific": { 00:10:00.371 "raid": { 00:10:00.371 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:00.371 "strip_size_kb": 64, 00:10:00.371 "state": "online", 00:10:00.371 "raid_level": "raid0", 00:10:00.371 "superblock": true, 
00:10:00.371 "num_base_bdevs": 4, 00:10:00.371 "num_base_bdevs_discovered": 4, 00:10:00.371 "num_base_bdevs_operational": 4, 00:10:00.371 "base_bdevs_list": [ 00:10:00.371 { 00:10:00.371 "name": "pt1", 00:10:00.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.371 "is_configured": true, 00:10:00.371 "data_offset": 2048, 00:10:00.371 "data_size": 63488 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "name": "pt2", 00:10:00.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.371 "is_configured": true, 00:10:00.371 "data_offset": 2048, 00:10:00.371 "data_size": 63488 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "name": "pt3", 00:10:00.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.371 "is_configured": true, 00:10:00.371 "data_offset": 2048, 00:10:00.371 "data_size": 63488 00:10:00.371 }, 00:10:00.371 { 00:10:00.371 "name": "pt4", 00:10:00.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:00.371 "is_configured": true, 00:10:00.371 "data_offset": 2048, 00:10:00.371 "data_size": 63488 00:10:00.371 } 00:10:00.371 ] 00:10:00.371 } 00:10:00.371 } 00:10:00.371 }' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:00.371 pt2 00:10:00.371 pt3 00:10:00.371 pt4' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.371 22:54:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.371 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.372 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 [2024-11-26 22:54:39.547732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=afc594a0-dc1c-4213-a9ac-a61445dac268 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z afc594a0-dc1c-4213-a9ac-a61445dac268 ']' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 [2024-11-26 22:54:39.579458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.630 [2024-11-26 22:54:39.579530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.630 [2024-11-26 22:54:39.579625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.630 [2024-11-26 22:54:39.579703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.630 [2024-11-26 22:54:39.579723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:00.630 22:54:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b 
''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.630 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.630 [2024-11-26 22:54:39.747551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:00.630 [2024-11-26 22:54:39.749740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:00.630 [2024-11-26 22:54:39.749852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:00.630 [2024-11-26 22:54:39.749909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:00.630 [2024-11-26 22:54:39.749992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:00.630 [2024-11-26 22:54:39.750092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:00.630 [2024-11-26 22:54:39.750114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:00.630 [2024-11-26 22:54:39.750136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:00.630 [2024-11-26 22:54:39.750151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.630 [2024-11-26 22:54:39.750163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:00.630 request: 00:10:00.630 { 00:10:00.630 "name": "raid_bdev1", 00:10:00.630 "raid_level": "raid0", 00:10:00.630 "base_bdevs": [ 00:10:00.630 "malloc1", 00:10:00.630 "malloc2", 00:10:00.630 "malloc3", 00:10:00.630 "malloc4" 00:10:00.630 ], 00:10:00.630 "strip_size_kb": 64, 00:10:00.630 
"superblock": false, 00:10:00.630 "method": "bdev_raid_create", 00:10:00.630 "req_id": 1 00:10:00.630 } 00:10:00.630 Got JSON-RPC error response 00:10:00.630 response: 00:10:00.630 { 00:10:00.630 "code": -17, 00:10:00.630 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:00.630 } 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.889 [2024-11-26 22:54:39.811522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:00.889 [2024-11-26 22:54:39.811578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.889 [2024-11-26 22:54:39.811596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:00.889 [2024-11-26 22:54:39.811610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.889 [2024-11-26 22:54:39.814022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.889 [2024-11-26 22:54:39.814067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.889 [2024-11-26 22:54:39.814133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:00.889 [2024-11-26 22:54:39.814174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.889 pt1 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.889 "name": "raid_bdev1", 00:10:00.889 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:00.889 "strip_size_kb": 64, 00:10:00.889 "state": "configuring", 00:10:00.889 "raid_level": "raid0", 00:10:00.889 "superblock": true, 00:10:00.889 "num_base_bdevs": 4, 00:10:00.889 "num_base_bdevs_discovered": 1, 00:10:00.889 "num_base_bdevs_operational": 4, 00:10:00.889 "base_bdevs_list": [ 00:10:00.889 { 00:10:00.889 "name": "pt1", 00:10:00.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.889 "is_configured": true, 00:10:00.889 "data_offset": 2048, 00:10:00.889 "data_size": 63488 00:10:00.889 }, 00:10:00.889 { 00:10:00.889 "name": null, 00:10:00.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.889 "is_configured": false, 00:10:00.889 "data_offset": 2048, 00:10:00.889 "data_size": 63488 00:10:00.889 }, 00:10:00.889 { 00:10:00.889 "name": null, 00:10:00.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.889 "is_configured": false, 00:10:00.889 "data_offset": 2048, 00:10:00.889 "data_size": 63488 00:10:00.889 }, 00:10:00.889 { 00:10:00.889 "name": null, 00:10:00.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:00.889 "is_configured": false, 00:10:00.889 "data_offset": 
2048, 00:10:00.889 "data_size": 63488 00:10:00.889 } 00:10:00.889 ] 00:10:00.889 }' 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.889 22:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.148 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:01.148 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.148 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.148 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.408 [2024-11-26 22:54:40.275663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.408 [2024-11-26 22:54:40.275718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.408 [2024-11-26 22:54:40.275738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:01.408 [2024-11-26 22:54:40.275750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.408 [2024-11-26 22:54:40.276145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.408 [2024-11-26 22:54:40.276177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.408 [2024-11-26 22:54:40.276243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:01.408 [2024-11-26 22:54:40.276287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.408 pt2 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.408 22:54:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.408 [2024-11-26 22:54:40.287677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.408 "name": "raid_bdev1", 00:10:01.408 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:01.408 "strip_size_kb": 64, 00:10:01.408 "state": "configuring", 00:10:01.408 "raid_level": "raid0", 00:10:01.408 "superblock": true, 00:10:01.408 "num_base_bdevs": 4, 00:10:01.408 "num_base_bdevs_discovered": 1, 00:10:01.408 "num_base_bdevs_operational": 4, 00:10:01.408 "base_bdevs_list": [ 00:10:01.408 { 00:10:01.408 "name": "pt1", 00:10:01.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.408 "is_configured": true, 00:10:01.408 "data_offset": 2048, 00:10:01.408 "data_size": 63488 00:10:01.408 }, 00:10:01.408 { 00:10:01.408 "name": null, 00:10:01.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.408 "is_configured": false, 00:10:01.408 "data_offset": 0, 00:10:01.408 "data_size": 63488 00:10:01.408 }, 00:10:01.408 { 00:10:01.408 "name": null, 00:10:01.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.408 "is_configured": false, 00:10:01.408 "data_offset": 2048, 00:10:01.408 "data_size": 63488 00:10:01.408 }, 00:10:01.408 { 00:10:01.408 "name": null, 00:10:01.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:01.408 "is_configured": false, 00:10:01.408 "data_offset": 2048, 00:10:01.408 "data_size": 63488 00:10:01.408 } 00:10:01.408 ] 00:10:01.408 }' 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.408 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.668 [2024-11-26 22:54:40.723780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.668 [2024-11-26 22:54:40.723836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.668 [2024-11-26 22:54:40.723856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:01.668 [2024-11-26 22:54:40.723866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.668 [2024-11-26 22:54:40.724240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.668 [2024-11-26 22:54:40.724276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.668 [2024-11-26 22:54:40.724346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:01.668 [2024-11-26 22:54:40.724373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.668 pt2 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.668 [2024-11-26 22:54:40.735783] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc3 00:10:01.668 [2024-11-26 22:54:40.735835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.668 [2024-11-26 22:54:40.735853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:01.668 [2024-11-26 22:54:40.735863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.668 [2024-11-26 22:54:40.736237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.668 [2024-11-26 22:54:40.736273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.668 [2024-11-26 22:54:40.736336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:01.668 [2024-11-26 22:54:40.736365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:01.668 pt3 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.668 [2024-11-26 22:54:40.747783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:01.668 [2024-11-26 22:54:40.747828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.668 [2024-11-26 22:54:40.747845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:01.668 [2024-11-26 22:54:40.747854] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.668 [2024-11-26 22:54:40.748221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.668 [2024-11-26 22:54:40.748260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:01.668 [2024-11-26 22:54:40.748326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:01.668 [2024-11-26 22:54:40.748349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:01.668 [2024-11-26 22:54:40.748455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:01.668 [2024-11-26 22:54:40.748473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:01.668 [2024-11-26 22:54:40.748733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:01.668 [2024-11-26 22:54:40.748874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:01.668 [2024-11-26 22:54:40.748897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:01.668 [2024-11-26 22:54:40.748997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.668 pt4 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.668 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.928 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.928 "name": "raid_bdev1", 00:10:01.928 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:01.928 "strip_size_kb": 64, 00:10:01.928 "state": "online", 00:10:01.928 "raid_level": "raid0", 00:10:01.928 "superblock": true, 00:10:01.928 "num_base_bdevs": 4, 00:10:01.928 "num_base_bdevs_discovered": 4, 00:10:01.928 "num_base_bdevs_operational": 4, 00:10:01.928 "base_bdevs_list": [ 00:10:01.928 { 00:10:01.928 "name": "pt1", 00:10:01.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.928 "is_configured": true, 00:10:01.928 "data_offset": 2048, 00:10:01.928 
"data_size": 63488 00:10:01.928 }, 00:10:01.928 { 00:10:01.928 "name": "pt2", 00:10:01.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.928 "is_configured": true, 00:10:01.928 "data_offset": 2048, 00:10:01.928 "data_size": 63488 00:10:01.928 }, 00:10:01.928 { 00:10:01.928 "name": "pt3", 00:10:01.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.928 "is_configured": true, 00:10:01.928 "data_offset": 2048, 00:10:01.928 "data_size": 63488 00:10:01.928 }, 00:10:01.928 { 00:10:01.928 "name": "pt4", 00:10:01.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:01.928 "is_configured": true, 00:10:01.928 "data_offset": 2048, 00:10:01.928 "data_size": 63488 00:10:01.928 } 00:10:01.928 ] 00:10:01.928 }' 00:10:01.928 22:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.928 22:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:02.187 [2024-11-26 22:54:41.184188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.187 "name": "raid_bdev1", 00:10:02.187 "aliases": [ 00:10:02.187 "afc594a0-dc1c-4213-a9ac-a61445dac268" 00:10:02.187 ], 00:10:02.187 "product_name": "Raid Volume", 00:10:02.187 "block_size": 512, 00:10:02.187 "num_blocks": 253952, 00:10:02.187 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:02.187 "assigned_rate_limits": { 00:10:02.187 "rw_ios_per_sec": 0, 00:10:02.187 "rw_mbytes_per_sec": 0, 00:10:02.187 "r_mbytes_per_sec": 0, 00:10:02.187 "w_mbytes_per_sec": 0 00:10:02.187 }, 00:10:02.187 "claimed": false, 00:10:02.187 "zoned": false, 00:10:02.187 "supported_io_types": { 00:10:02.187 "read": true, 00:10:02.187 "write": true, 00:10:02.187 "unmap": true, 00:10:02.187 "flush": true, 00:10:02.187 "reset": true, 00:10:02.187 "nvme_admin": false, 00:10:02.187 "nvme_io": false, 00:10:02.187 "nvme_io_md": false, 00:10:02.187 "write_zeroes": true, 00:10:02.187 "zcopy": false, 00:10:02.187 "get_zone_info": false, 00:10:02.187 "zone_management": false, 00:10:02.187 "zone_append": false, 00:10:02.187 "compare": false, 00:10:02.187 "compare_and_write": false, 00:10:02.187 "abort": false, 00:10:02.187 "seek_hole": false, 00:10:02.187 "seek_data": false, 00:10:02.187 "copy": false, 00:10:02.187 "nvme_iov_md": false 00:10:02.187 }, 00:10:02.187 "memory_domains": [ 00:10:02.187 { 00:10:02.187 "dma_device_id": "system", 00:10:02.187 "dma_device_type": 1 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.187 "dma_device_type": 2 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "system", 00:10:02.187 "dma_device_type": 1 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:02.187 "dma_device_type": 2 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "system", 00:10:02.187 "dma_device_type": 1 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.187 "dma_device_type": 2 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "system", 00:10:02.187 "dma_device_type": 1 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.187 "dma_device_type": 2 00:10:02.187 } 00:10:02.187 ], 00:10:02.187 "driver_specific": { 00:10:02.187 "raid": { 00:10:02.187 "uuid": "afc594a0-dc1c-4213-a9ac-a61445dac268", 00:10:02.187 "strip_size_kb": 64, 00:10:02.187 "state": "online", 00:10:02.187 "raid_level": "raid0", 00:10:02.187 "superblock": true, 00:10:02.187 "num_base_bdevs": 4, 00:10:02.187 "num_base_bdevs_discovered": 4, 00:10:02.187 "num_base_bdevs_operational": 4, 00:10:02.187 "base_bdevs_list": [ 00:10:02.187 { 00:10:02.187 "name": "pt1", 00:10:02.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.187 "is_configured": true, 00:10:02.187 "data_offset": 2048, 00:10:02.187 "data_size": 63488 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "name": "pt2", 00:10:02.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.187 "is_configured": true, 00:10:02.187 "data_offset": 2048, 00:10:02.187 "data_size": 63488 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "name": "pt3", 00:10:02.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.187 "is_configured": true, 00:10:02.187 "data_offset": 2048, 00:10:02.187 "data_size": 63488 00:10:02.187 }, 00:10:02.187 { 00:10:02.187 "name": "pt4", 00:10:02.187 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:02.187 "is_configured": true, 00:10:02.187 "data_offset": 2048, 00:10:02.187 "data_size": 63488 00:10:02.187 } 00:10:02.187 ] 00:10:02.187 } 00:10:02.187 } 00:10:02.187 }' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:02.187 pt2 00:10:02.187 pt3 00:10:02.187 pt4' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.187 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:02.446 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.447 
22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.447 [2024-11-26 22:54:41.488264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' afc594a0-dc1c-4213-a9ac-a61445dac268 '!=' afc594a0-dc1c-4213-a9ac-a61445dac268 ']' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83255 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83255 ']' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83255 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83255 00:10:02.447 killing process with pid 83255 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83255' 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83255 00:10:02.447 [2024-11-26 22:54:41.569878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.447 [2024-11-26 22:54:41.569951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.447 [2024-11-26 22:54:41.570031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.447 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83255 00:10:02.447 [2024-11-26 22:54:41.570041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:02.705 [2024-11-26 22:54:41.649578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.966 22:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:02.966 00:10:02.966 real 0m4.274s 00:10:02.966 user 0m6.505s 00:10:02.966 sys 0m1.030s 00:10:02.966 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.966 22:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.966 ************************************ 00:10:02.966 END TEST raid_superblock_test 00:10:02.966 ************************************ 00:10:02.966 22:54:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:02.966 22:54:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:02.966 22:54:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.966 22:54:42 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.966 ************************************ 00:10:02.966 START TEST raid_read_error_test 00:10:02.966 ************************************ 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.966 22:54:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ImuaSGniko 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83503 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83503 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83503 ']' 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.967 
22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.967 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 [2024-11-26 22:54:42.171554] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:03.226 [2024-11-26 22:54:42.172033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83503 ] 00:10:03.226 [2024-11-26 22:54:42.310835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:03.226 [2024-11-26 22:54:42.351039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.486 [2024-11-26 22:54:42.390512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.487 [2024-11-26 22:54:42.466697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.487 [2024-11-26 22:54:42.466759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.056 22:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.056 BaseBdev1_malloc 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.056 true 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.056 [2024-11-26 22:54:43.036583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.056 [2024-11-26 22:54:43.036670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.056 [2024-11-26 22:54:43.036692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.056 [2024-11-26 22:54:43.036717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.056 [2024-11-26 22:54:43.039236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.056 [2024-11-26 22:54:43.039286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.056 BaseBdev1 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.056 BaseBdev2_malloc 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.056 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 true 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 [2024-11-26 22:54:43.083101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.057 [2024-11-26 22:54:43.083153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.057 [2024-11-26 22:54:43.083171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.057 [2024-11-26 22:54:43.083185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.057 [2024-11-26 22:54:43.085641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.057 [2024-11-26 22:54:43.085702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.057 BaseBdev2 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 BaseBdev3_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 true 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 [2024-11-26 22:54:43.129723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.057 [2024-11-26 22:54:43.129775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.057 [2024-11-26 22:54:43.129794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:04.057 [2024-11-26 22:54:43.129808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.057 [2024-11-26 22:54:43.132174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.057 [2024-11-26 22:54:43.132225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:04.057 BaseBdev3 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.057 BaseBdev4_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.057 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.316 true 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.316 [2024-11-26 22:54:43.197493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:04.316 [2024-11-26 22:54:43.197551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.316 [2024-11-26 22:54:43.197571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:04.316 [2024-11-26 22:54:43.197586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.316 [2024-11-26 22:54:43.200046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.316 [2024-11-26 22:54:43.200089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:04.316 BaseBdev4 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.316 [2024-11-26 22:54:43.209552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.316 [2024-11-26 22:54:43.211733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.316 [2024-11-26 22:54:43.211821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.316 [2024-11-26 22:54:43.211880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.316 [2024-11-26 22:54:43.212099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.316 [2024-11-26 22:54:43.212124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.316 [2024-11-26 22:54:43.212401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:04.316 [2024-11-26 22:54:43.212590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.316 [2024-11-26 22:54:43.212609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.316 [2024-11-26 22:54:43.212753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.316 22:54:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.316 "name": "raid_bdev1", 00:10:04.316 "uuid": "974e0bed-c7ad-425e-b622-98edf9eeb12e", 00:10:04.316 "strip_size_kb": 64, 00:10:04.316 "state": "online", 00:10:04.316 "raid_level": "raid0", 00:10:04.316 "superblock": true, 00:10:04.316 "num_base_bdevs": 4, 00:10:04.316 "num_base_bdevs_discovered": 4, 00:10:04.316 "num_base_bdevs_operational": 4, 00:10:04.316 "base_bdevs_list": [ 00:10:04.316 { 00:10:04.316 "name": "BaseBdev1", 00:10:04.316 "uuid": "1ef338bb-567c-57a0-9039-33c0e4a3767f", 00:10:04.316 "is_configured": true, 00:10:04.316 "data_offset": 2048, 00:10:04.316 "data_size": 63488 00:10:04.316 }, 00:10:04.316 { 00:10:04.316 "name": "BaseBdev2", 00:10:04.316 "uuid": "4230cd93-afcc-5fe7-91bf-701579d714ce", 
00:10:04.316 "is_configured": true, 00:10:04.316 "data_offset": 2048, 00:10:04.316 "data_size": 63488 00:10:04.316 }, 00:10:04.316 { 00:10:04.316 "name": "BaseBdev3", 00:10:04.316 "uuid": "f1b4df38-84a1-55ff-9fa7-6b7ee7815f0c", 00:10:04.316 "is_configured": true, 00:10:04.316 "data_offset": 2048, 00:10:04.316 "data_size": 63488 00:10:04.316 }, 00:10:04.316 { 00:10:04.316 "name": "BaseBdev4", 00:10:04.316 "uuid": "ba3553df-dca6-53b7-ab5c-464663655798", 00:10:04.316 "is_configured": true, 00:10:04.316 "data_offset": 2048, 00:10:04.316 "data_size": 63488 00:10:04.316 } 00:10:04.316 ] 00:10:04.316 }' 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.316 22:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.575 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:04.575 22:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:04.835 [2024-11-26 22:54:43.726083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:05.797 22:54:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.797 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.798 "name": "raid_bdev1", 00:10:05.798 "uuid": "974e0bed-c7ad-425e-b622-98edf9eeb12e", 00:10:05.798 "strip_size_kb": 64, 00:10:05.798 "state": "online", 00:10:05.798 "raid_level": "raid0", 00:10:05.798 "superblock": true, 00:10:05.798 "num_base_bdevs": 4, 
00:10:05.798 "num_base_bdevs_discovered": 4, 00:10:05.798 "num_base_bdevs_operational": 4, 00:10:05.798 "base_bdevs_list": [ 00:10:05.798 { 00:10:05.798 "name": "BaseBdev1", 00:10:05.798 "uuid": "1ef338bb-567c-57a0-9039-33c0e4a3767f", 00:10:05.798 "is_configured": true, 00:10:05.798 "data_offset": 2048, 00:10:05.798 "data_size": 63488 00:10:05.798 }, 00:10:05.798 { 00:10:05.798 "name": "BaseBdev2", 00:10:05.798 "uuid": "4230cd93-afcc-5fe7-91bf-701579d714ce", 00:10:05.798 "is_configured": true, 00:10:05.798 "data_offset": 2048, 00:10:05.798 "data_size": 63488 00:10:05.798 }, 00:10:05.798 { 00:10:05.798 "name": "BaseBdev3", 00:10:05.798 "uuid": "f1b4df38-84a1-55ff-9fa7-6b7ee7815f0c", 00:10:05.798 "is_configured": true, 00:10:05.798 "data_offset": 2048, 00:10:05.798 "data_size": 63488 00:10:05.798 }, 00:10:05.798 { 00:10:05.798 "name": "BaseBdev4", 00:10:05.798 "uuid": "ba3553df-dca6-53b7-ab5c-464663655798", 00:10:05.798 "is_configured": true, 00:10:05.798 "data_offset": 2048, 00:10:05.798 "data_size": 63488 00:10:05.798 } 00:10:05.798 ] 00:10:05.798 }' 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.798 22:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.058 [2024-11-26 22:54:45.097236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.058 [2024-11-26 22:54:45.097295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.058 [2024-11-26 22:54:45.099937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.058 [2024-11-26 22:54:45.100024] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.058 [2024-11-26 22:54:45.100078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.058 [2024-11-26 22:54:45.100092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:06.058 { 00:10:06.058 "results": [ 00:10:06.058 { 00:10:06.058 "job": "raid_bdev1", 00:10:06.058 "core_mask": "0x1", 00:10:06.058 "workload": "randrw", 00:10:06.058 "percentage": 50, 00:10:06.058 "status": "finished", 00:10:06.058 "queue_depth": 1, 00:10:06.058 "io_size": 131072, 00:10:06.058 "runtime": 1.369094, 00:10:06.058 "iops": 14374.469539710202, 00:10:06.058 "mibps": 1796.8086924637753, 00:10:06.058 "io_failed": 1, 00:10:06.058 "io_timeout": 0, 00:10:06.058 "avg_latency_us": 97.27341810497677, 00:10:06.058 "min_latency_us": 25.77181208053691, 00:10:06.058 "max_latency_us": 1299.5241000610129 00:10:06.058 } 00:10:06.058 ], 00:10:06.058 "core_count": 1 00:10:06.058 } 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83503 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83503 ']' 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83503 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83503 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.058 22:54:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.058 killing process with pid 83503 00:10:06.059 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83503' 00:10:06.059 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83503 00:10:06.059 [2024-11-26 22:54:45.144620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.059 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83503 00:10:06.319 [2024-11-26 22:54:45.209868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ImuaSGniko 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:06.579 00:10:06.579 real 0m3.490s 00:10:06.579 user 0m4.219s 00:10:06.579 sys 0m0.667s 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.579 22:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.579 ************************************ 00:10:06.579 END TEST raid_read_error_test 00:10:06.579 ************************************ 00:10:06.579 22:54:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:10:06.579 22:54:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.579 22:54:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.579 22:54:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.579 ************************************ 00:10:06.579 START TEST raid_write_error_test 00:10:06.579 ************************************ 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.579 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mWiI57amWV 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83639 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 83639 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83639 ']' 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.580 22:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.840 [2024-11-26 22:54:45.739345] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:06.840 [2024-11-26 22:54:45.739465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83639 ] 00:10:06.840 [2024-11-26 22:54:45.875558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:06.840 [2024-11-26 22:54:45.916169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.840 [2024-11-26 22:54:45.954827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.100 [2024-11-26 22:54:46.030919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.100 [2024-11-26 22:54:46.030997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 BaseBdev1_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 true 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 [2024-11-26 22:54:46.612823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:07.670 [2024-11-26 22:54:46.612899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.670 [2024-11-26 22:54:46.612919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:07.670 [2024-11-26 22:54:46.612945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.670 [2024-11-26 22:54:46.615444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.670 [2024-11-26 22:54:46.615490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:07.670 BaseBdev1 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 BaseBdev2_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 true 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 [2024-11-26 22:54:46.659485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.670 [2024-11-26 22:54:46.659537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.670 [2024-11-26 22:54:46.659556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:07.670 [2024-11-26 22:54:46.659570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.670 [2024-11-26 22:54:46.661940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.670 [2024-11-26 22:54:46.661980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.670 BaseBdev2 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 BaseBdev3_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 true 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.670 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.670 [2024-11-26 22:54:46.705954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:07.670 [2024-11-26 22:54:46.706023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.670 [2024-11-26 22:54:46.706042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:07.671 [2024-11-26 22:54:46.706055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.671 [2024-11-26 22:54:46.708403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.671 [2024-11-26 22:54:46.708442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:07.671 BaseBdev3 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.671 BaseBdev4_malloc 00:10:07.671 
22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.671 true 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.671 [2024-11-26 22:54:46.770783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:07.671 [2024-11-26 22:54:46.770844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.671 [2024-11-26 22:54:46.770865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:07.671 [2024-11-26 22:54:46.770879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.671 [2024-11-26 22:54:46.773276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.671 [2024-11-26 22:54:46.773316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:07.671 BaseBdev4 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:07.671 22:54:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.671 [2024-11-26 22:54:46.782841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.671 [2024-11-26 22:54:46.784948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.671 [2024-11-26 22:54:46.785043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.671 [2024-11-26 22:54:46.785101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.671 [2024-11-26 22:54:46.785344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:07.671 [2024-11-26 22:54:46.785373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:07.671 [2024-11-26 22:54:46.785646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:07.671 [2024-11-26 22:54:46.785829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:07.671 [2024-11-26 22:54:46.785848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:07.671 [2024-11-26 22:54:46.785974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.671 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.932 "name": "raid_bdev1", 00:10:07.932 "uuid": "de0455fc-57da-4cc4-9f8a-b89f27ae84f6", 00:10:07.932 "strip_size_kb": 64, 00:10:07.932 "state": "online", 00:10:07.932 "raid_level": "raid0", 00:10:07.932 "superblock": true, 00:10:07.932 "num_base_bdevs": 4, 00:10:07.932 "num_base_bdevs_discovered": 4, 00:10:07.932 "num_base_bdevs_operational": 4, 00:10:07.932 "base_bdevs_list": [ 00:10:07.932 { 00:10:07.932 "name": "BaseBdev1", 00:10:07.932 "uuid": "81e1ea39-78ec-5ee1-945d-4d9d7e028d28", 00:10:07.932 "is_configured": true, 00:10:07.932 "data_offset": 2048, 00:10:07.932 "data_size": 63488 00:10:07.932 }, 00:10:07.932 { 00:10:07.932 
"name": "BaseBdev2", 00:10:07.932 "uuid": "06edafa8-e087-5f6f-8fe5-403d8b2152df", 00:10:07.932 "is_configured": true, 00:10:07.932 "data_offset": 2048, 00:10:07.932 "data_size": 63488 00:10:07.932 }, 00:10:07.932 { 00:10:07.932 "name": "BaseBdev3", 00:10:07.932 "uuid": "321bd98d-0283-5271-8324-4c344962f57f", 00:10:07.932 "is_configured": true, 00:10:07.932 "data_offset": 2048, 00:10:07.932 "data_size": 63488 00:10:07.932 }, 00:10:07.932 { 00:10:07.932 "name": "BaseBdev4", 00:10:07.932 "uuid": "d9b43919-a0ef-5a71-aaf4-69e670a68516", 00:10:07.932 "is_configured": true, 00:10:07.932 "data_offset": 2048, 00:10:07.932 "data_size": 63488 00:10:07.932 } 00:10:07.932 ] 00:10:07.932 }' 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.932 22:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.192 22:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.192 22:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.192 [2024-11-26 22:54:47.311442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.132 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.133 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.133 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.133 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.133 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.393 "name": "raid_bdev1", 00:10:09.393 "uuid": "de0455fc-57da-4cc4-9f8a-b89f27ae84f6", 00:10:09.393 "strip_size_kb": 64, 00:10:09.393 "state": "online", 
00:10:09.393 "raid_level": "raid0", 00:10:09.393 "superblock": true, 00:10:09.393 "num_base_bdevs": 4, 00:10:09.393 "num_base_bdevs_discovered": 4, 00:10:09.393 "num_base_bdevs_operational": 4, 00:10:09.393 "base_bdevs_list": [ 00:10:09.393 { 00:10:09.393 "name": "BaseBdev1", 00:10:09.393 "uuid": "81e1ea39-78ec-5ee1-945d-4d9d7e028d28", 00:10:09.393 "is_configured": true, 00:10:09.393 "data_offset": 2048, 00:10:09.393 "data_size": 63488 00:10:09.393 }, 00:10:09.393 { 00:10:09.393 "name": "BaseBdev2", 00:10:09.393 "uuid": "06edafa8-e087-5f6f-8fe5-403d8b2152df", 00:10:09.393 "is_configured": true, 00:10:09.393 "data_offset": 2048, 00:10:09.393 "data_size": 63488 00:10:09.393 }, 00:10:09.393 { 00:10:09.393 "name": "BaseBdev3", 00:10:09.393 "uuid": "321bd98d-0283-5271-8324-4c344962f57f", 00:10:09.393 "is_configured": true, 00:10:09.393 "data_offset": 2048, 00:10:09.393 "data_size": 63488 00:10:09.393 }, 00:10:09.393 { 00:10:09.393 "name": "BaseBdev4", 00:10:09.393 "uuid": "d9b43919-a0ef-5a71-aaf4-69e670a68516", 00:10:09.393 "is_configured": true, 00:10:09.393 "data_offset": 2048, 00:10:09.393 "data_size": 63488 00:10:09.393 } 00:10:09.393 ] 00:10:09.393 }' 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.393 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 [2024-11-26 22:54:48.704089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.654 [2024-11-26 22:54:48.704138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.654 [2024-11-26 22:54:48.706775] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.654 [2024-11-26 22:54:48.706873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.654 [2024-11-26 22:54:48.706928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.654 [2024-11-26 22:54:48.706958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:09.654 { 00:10:09.654 "results": [ 00:10:09.654 { 00:10:09.654 "job": "raid_bdev1", 00:10:09.654 "core_mask": "0x1", 00:10:09.654 "workload": "randrw", 00:10:09.654 "percentage": 50, 00:10:09.654 "status": "finished", 00:10:09.654 "queue_depth": 1, 00:10:09.654 "io_size": 131072, 00:10:09.654 "runtime": 1.390554, 00:10:09.654 "iops": 14074.246667155681, 00:10:09.654 "mibps": 1759.2808333944602, 00:10:09.654 "io_failed": 1, 00:10:09.654 "io_timeout": 0, 00:10:09.654 "avg_latency_us": 99.78447697309186, 00:10:09.654 "min_latency_us": 25.883378366599842, 00:10:09.654 "max_latency_us": 1463.7496731456463 00:10:09.654 } 00:10:09.654 ], 00:10:09.654 "core_count": 1 00:10:09.654 } 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83639 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83639 ']' 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83639 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83639 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83639' 00:10:09.654 killing process with pid 83639 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83639 00:10:09.654 [2024-11-26 22:54:48.753841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.654 22:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83639 00:10:09.914 [2024-11-26 22:54:48.820100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mWiI57amWV 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:10.181 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.182 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:10.182 22:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:10.182 00:10:10.182 real 0m3.531s 00:10:10.182 user 0m4.275s 00:10:10.182 sys 0m0.672s 00:10:10.182 22:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.182 22:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 ************************************ 00:10:10.182 END TEST raid_write_error_test 00:10:10.182 
************************************ 00:10:10.182 22:54:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:10.182 22:54:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:10.182 22:54:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:10.182 22:54:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.182 22:54:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 ************************************ 00:10:10.182 START TEST raid_state_function_test 00:10:10.182 ************************************ 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.182 22:54:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.182 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:10.183 22:54:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83771 00:10:10.183 Process raid pid: 83771 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83771' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83771 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83771 ']' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.183 22:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.445 [2024-11-26 22:54:49.341594] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:10.445 [2024-11-26 22:54:49.341724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.445 [2024-11-26 22:54:49.479081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:10.445 [2024-11-26 22:54:49.516168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.445 [2024-11-26 22:54:49.555557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.705 [2024-11-26 22:54:49.631639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.705 [2024-11-26 22:54:49.631700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.274 [2024-11-26 22:54:50.174714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.274 [2024-11-26 22:54:50.174779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.274 [2024-11-26 22:54:50.174806] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.274 [2024-11-26 22:54:50.174817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.274 [2024-11-26 22:54:50.174831] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.274 [2024-11-26 22:54:50.174840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.274 [2024-11-26 22:54:50.174851] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.274 
[2024-11-26 22:54:50.174859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.274 22:54:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.274 "name": "Existed_Raid", 00:10:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.274 "strip_size_kb": 64, 00:10:11.274 "state": "configuring", 00:10:11.274 "raid_level": "concat", 00:10:11.274 "superblock": false, 00:10:11.274 "num_base_bdevs": 4, 00:10:11.274 "num_base_bdevs_discovered": 0, 00:10:11.274 "num_base_bdevs_operational": 4, 00:10:11.274 "base_bdevs_list": [ 00:10:11.274 { 00:10:11.274 "name": "BaseBdev1", 00:10:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.274 "is_configured": false, 00:10:11.274 "data_offset": 0, 00:10:11.274 "data_size": 0 00:10:11.274 }, 00:10:11.274 { 00:10:11.274 "name": "BaseBdev2", 00:10:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.274 "is_configured": false, 00:10:11.274 "data_offset": 0, 00:10:11.274 "data_size": 0 00:10:11.274 }, 00:10:11.274 { 00:10:11.274 "name": "BaseBdev3", 00:10:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.274 "is_configured": false, 00:10:11.274 "data_offset": 0, 00:10:11.274 "data_size": 0 00:10:11.274 }, 00:10:11.274 { 00:10:11.274 "name": "BaseBdev4", 00:10:11.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.274 "is_configured": false, 00:10:11.274 "data_offset": 0, 00:10:11.274 "data_size": 0 00:10:11.274 } 00:10:11.274 ] 00:10:11.274 }' 00:10:11.274 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.275 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.534 [2024-11-26 22:54:50.642693] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.534 [2024-11-26 22:54:50.642748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.534 [2024-11-26 22:54:50.654745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.534 [2024-11-26 22:54:50.654789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.534 [2024-11-26 22:54:50.654803] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.534 [2024-11-26 22:54:50.654812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.534 [2024-11-26 22:54:50.654823] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.534 [2024-11-26 22:54:50.654832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.534 [2024-11-26 22:54:50.654842] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.534 [2024-11-26 22:54:50.654851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.534 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 [2024-11-26 22:54:50.681860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.793 BaseBdev1 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.793 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 [ 00:10:11.793 { 
00:10:11.793 "name": "BaseBdev1", 00:10:11.793 "aliases": [ 00:10:11.793 "a1567ff7-cda7-46df-a232-f1290fd252d1" 00:10:11.793 ], 00:10:11.793 "product_name": "Malloc disk", 00:10:11.793 "block_size": 512, 00:10:11.793 "num_blocks": 65536, 00:10:11.793 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:11.793 "assigned_rate_limits": { 00:10:11.793 "rw_ios_per_sec": 0, 00:10:11.793 "rw_mbytes_per_sec": 0, 00:10:11.793 "r_mbytes_per_sec": 0, 00:10:11.793 "w_mbytes_per_sec": 0 00:10:11.793 }, 00:10:11.793 "claimed": true, 00:10:11.793 "claim_type": "exclusive_write", 00:10:11.793 "zoned": false, 00:10:11.793 "supported_io_types": { 00:10:11.793 "read": true, 00:10:11.793 "write": true, 00:10:11.793 "unmap": true, 00:10:11.793 "flush": true, 00:10:11.793 "reset": true, 00:10:11.793 "nvme_admin": false, 00:10:11.793 "nvme_io": false, 00:10:11.793 "nvme_io_md": false, 00:10:11.793 "write_zeroes": true, 00:10:11.793 "zcopy": true, 00:10:11.793 "get_zone_info": false, 00:10:11.793 "zone_management": false, 00:10:11.793 "zone_append": false, 00:10:11.793 "compare": false, 00:10:11.793 "compare_and_write": false, 00:10:11.793 "abort": true, 00:10:11.794 "seek_hole": false, 00:10:11.794 "seek_data": false, 00:10:11.794 "copy": true, 00:10:11.794 "nvme_iov_md": false 00:10:11.794 }, 00:10:11.794 "memory_domains": [ 00:10:11.794 { 00:10:11.794 "dma_device_id": "system", 00:10:11.794 "dma_device_type": 1 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.794 "dma_device_type": 2 00:10:11.794 } 00:10:11.794 ], 00:10:11.794 "driver_specific": {} 00:10:11.794 } 00:10:11.794 ] 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.794 "name": "Existed_Raid", 00:10:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.794 "strip_size_kb": 64, 00:10:11.794 "state": "configuring", 00:10:11.794 "raid_level": "concat", 00:10:11.794 "superblock": false, 00:10:11.794 "num_base_bdevs": 4, 00:10:11.794 
"num_base_bdevs_discovered": 1, 00:10:11.794 "num_base_bdevs_operational": 4, 00:10:11.794 "base_bdevs_list": [ 00:10:11.794 { 00:10:11.794 "name": "BaseBdev1", 00:10:11.794 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:11.794 "is_configured": true, 00:10:11.794 "data_offset": 0, 00:10:11.794 "data_size": 65536 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev2", 00:10:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.794 "is_configured": false, 00:10:11.794 "data_offset": 0, 00:10:11.794 "data_size": 0 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev3", 00:10:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.794 "is_configured": false, 00:10:11.794 "data_offset": 0, 00:10:11.794 "data_size": 0 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev4", 00:10:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.794 "is_configured": false, 00:10:11.794 "data_offset": 0, 00:10:11.794 "data_size": 0 00:10:11.794 } 00:10:11.794 ] 00:10:11.794 }' 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.794 22:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 [2024-11-26 22:54:51.149989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.054 [2024-11-26 22:54:51.150050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.054 22:54:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 [2024-11-26 22:54:51.162051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.054 [2024-11-26 22:54:51.164217] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.054 [2024-11-26 22:54:51.164271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.054 [2024-11-26 22:54:51.164284] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.054 [2024-11-26 22:54:51.164294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.054 [2024-11-26 22:54:51.164303] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.054 [2024-11-26 22:54:51.164312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.312 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.312 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.312 "name": "Existed_Raid", 00:10:12.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.312 "strip_size_kb": 64, 00:10:12.312 "state": "configuring", 00:10:12.312 "raid_level": "concat", 00:10:12.312 "superblock": false, 00:10:12.312 "num_base_bdevs": 4, 00:10:12.312 "num_base_bdevs_discovered": 1, 00:10:12.312 "num_base_bdevs_operational": 4, 00:10:12.312 "base_bdevs_list": [ 00:10:12.312 { 00:10:12.312 "name": "BaseBdev1", 00:10:12.312 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:12.312 
"is_configured": true, 00:10:12.312 "data_offset": 0, 00:10:12.312 "data_size": 65536 00:10:12.312 }, 00:10:12.312 { 00:10:12.312 "name": "BaseBdev2", 00:10:12.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.312 "is_configured": false, 00:10:12.312 "data_offset": 0, 00:10:12.312 "data_size": 0 00:10:12.312 }, 00:10:12.312 { 00:10:12.312 "name": "BaseBdev3", 00:10:12.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.312 "is_configured": false, 00:10:12.312 "data_offset": 0, 00:10:12.312 "data_size": 0 00:10:12.312 }, 00:10:12.312 { 00:10:12.312 "name": "BaseBdev4", 00:10:12.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.312 "is_configured": false, 00:10:12.312 "data_offset": 0, 00:10:12.312 "data_size": 0 00:10:12.312 } 00:10:12.312 ] 00:10:12.312 }' 00:10:12.312 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.312 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 [2024-11-26 22:54:51.646929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.570 BaseBdev2 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.570 22:54:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 [ 00:10:12.570 { 00:10:12.570 "name": "BaseBdev2", 00:10:12.570 "aliases": [ 00:10:12.570 "bfdf97dc-85dd-4220-98d3-2688a94be3ab" 00:10:12.570 ], 00:10:12.570 "product_name": "Malloc disk", 00:10:12.570 "block_size": 512, 00:10:12.570 "num_blocks": 65536, 00:10:12.570 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:12.570 "assigned_rate_limits": { 00:10:12.570 "rw_ios_per_sec": 0, 00:10:12.570 "rw_mbytes_per_sec": 0, 00:10:12.570 "r_mbytes_per_sec": 0, 00:10:12.570 "w_mbytes_per_sec": 0 00:10:12.570 }, 00:10:12.570 "claimed": true, 00:10:12.570 "claim_type": "exclusive_write", 00:10:12.570 "zoned": false, 00:10:12.570 "supported_io_types": { 00:10:12.570 "read": true, 00:10:12.570 "write": true, 00:10:12.570 "unmap": true, 00:10:12.570 "flush": true, 00:10:12.570 "reset": true, 00:10:12.570 "nvme_admin": false, 00:10:12.570 "nvme_io": false, 00:10:12.570 "nvme_io_md": 
false, 00:10:12.570 "write_zeroes": true, 00:10:12.570 "zcopy": true, 00:10:12.570 "get_zone_info": false, 00:10:12.570 "zone_management": false, 00:10:12.570 "zone_append": false, 00:10:12.570 "compare": false, 00:10:12.570 "compare_and_write": false, 00:10:12.570 "abort": true, 00:10:12.570 "seek_hole": false, 00:10:12.570 "seek_data": false, 00:10:12.570 "copy": true, 00:10:12.570 "nvme_iov_md": false 00:10:12.570 }, 00:10:12.570 "memory_domains": [ 00:10:12.570 { 00:10:12.570 "dma_device_id": "system", 00:10:12.570 "dma_device_type": 1 00:10:12.570 }, 00:10:12.570 { 00:10:12.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.570 "dma_device_type": 2 00:10:12.570 } 00:10:12.570 ], 00:10:12.570 "driver_specific": {} 00:10:12.570 } 00:10:12.570 ] 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.570 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.864 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.864 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.864 "name": "Existed_Raid", 00:10:12.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.864 "strip_size_kb": 64, 00:10:12.864 "state": "configuring", 00:10:12.864 "raid_level": "concat", 00:10:12.864 "superblock": false, 00:10:12.864 "num_base_bdevs": 4, 00:10:12.864 "num_base_bdevs_discovered": 2, 00:10:12.864 "num_base_bdevs_operational": 4, 00:10:12.864 "base_bdevs_list": [ 00:10:12.864 { 00:10:12.864 "name": "BaseBdev1", 00:10:12.864 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:12.864 "is_configured": true, 00:10:12.864 "data_offset": 0, 00:10:12.864 "data_size": 65536 00:10:12.864 }, 00:10:12.864 { 00:10:12.864 "name": "BaseBdev2", 00:10:12.864 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:12.864 "is_configured": true, 00:10:12.864 "data_offset": 0, 00:10:12.864 "data_size": 65536 00:10:12.864 }, 00:10:12.864 { 00:10:12.864 "name": "BaseBdev3", 00:10:12.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.864 
"is_configured": false, 00:10:12.864 "data_offset": 0, 00:10:12.864 "data_size": 0 00:10:12.864 }, 00:10:12.864 { 00:10:12.865 "name": "BaseBdev4", 00:10:12.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.865 "is_configured": false, 00:10:12.865 "data_offset": 0, 00:10:12.865 "data_size": 0 00:10:12.865 } 00:10:12.865 ] 00:10:12.865 }' 00:10:12.865 22:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.865 22:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.124 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.124 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 [2024-11-26 22:54:52.140648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.125 BaseBdev3 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.125 22:54:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.125 [ 00:10:13.125 { 00:10:13.125 "name": "BaseBdev3", 00:10:13.125 "aliases": [ 00:10:13.125 "7648c83c-c64f-4808-ac11-035270e9e0b0" 00:10:13.125 ], 00:10:13.125 "product_name": "Malloc disk", 00:10:13.125 "block_size": 512, 00:10:13.125 "num_blocks": 65536, 00:10:13.125 "uuid": "7648c83c-c64f-4808-ac11-035270e9e0b0", 00:10:13.125 "assigned_rate_limits": { 00:10:13.125 "rw_ios_per_sec": 0, 00:10:13.125 "rw_mbytes_per_sec": 0, 00:10:13.125 "r_mbytes_per_sec": 0, 00:10:13.125 "w_mbytes_per_sec": 0 00:10:13.125 }, 00:10:13.125 "claimed": true, 00:10:13.125 "claim_type": "exclusive_write", 00:10:13.125 "zoned": false, 00:10:13.125 "supported_io_types": { 00:10:13.125 "read": true, 00:10:13.125 "write": true, 00:10:13.125 "unmap": true, 00:10:13.125 "flush": true, 00:10:13.125 "reset": true, 00:10:13.125 "nvme_admin": false, 00:10:13.125 "nvme_io": false, 00:10:13.125 "nvme_io_md": false, 00:10:13.125 "write_zeroes": true, 00:10:13.125 "zcopy": true, 00:10:13.125 "get_zone_info": false, 00:10:13.125 "zone_management": false, 00:10:13.125 "zone_append": false, 00:10:13.125 "compare": false, 00:10:13.125 "compare_and_write": false, 00:10:13.125 "abort": true, 00:10:13.125 "seek_hole": false, 00:10:13.125 "seek_data": false, 00:10:13.125 "copy": true, 00:10:13.125 "nvme_iov_md": false 00:10:13.125 }, 00:10:13.125 
"memory_domains": [ 00:10:13.125 { 00:10:13.125 "dma_device_id": "system", 00:10:13.125 "dma_device_type": 1 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.125 "dma_device_type": 2 00:10:13.125 } 00:10:13.125 ], 00:10:13.125 "driver_specific": {} 00:10:13.125 } 00:10:13.125 ] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.125 "name": "Existed_Raid", 00:10:13.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.125 "strip_size_kb": 64, 00:10:13.125 "state": "configuring", 00:10:13.125 "raid_level": "concat", 00:10:13.125 "superblock": false, 00:10:13.125 "num_base_bdevs": 4, 00:10:13.125 "num_base_bdevs_discovered": 3, 00:10:13.125 "num_base_bdevs_operational": 4, 00:10:13.125 "base_bdevs_list": [ 00:10:13.125 { 00:10:13.125 "name": "BaseBdev1", 00:10:13.125 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:13.125 "is_configured": true, 00:10:13.125 "data_offset": 0, 00:10:13.125 "data_size": 65536 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "name": "BaseBdev2", 00:10:13.125 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:13.125 "is_configured": true, 00:10:13.125 "data_offset": 0, 00:10:13.125 "data_size": 65536 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "name": "BaseBdev3", 00:10:13.125 "uuid": "7648c83c-c64f-4808-ac11-035270e9e0b0", 00:10:13.125 "is_configured": true, 00:10:13.125 "data_offset": 0, 00:10:13.125 "data_size": 65536 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "name": "BaseBdev4", 00:10:13.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.125 "is_configured": false, 00:10:13.125 "data_offset": 0, 00:10:13.125 "data_size": 0 00:10:13.125 } 00:10:13.125 ] 00:10:13.125 }' 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:10:13.125 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 [2024-11-26 22:54:52.661708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.697 [2024-11-26 22:54:52.661776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:13.697 [2024-11-26 22:54:52.661795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:13.697 [2024-11-26 22:54:52.662148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:13.697 [2024-11-26 22:54:52.662353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:13.697 [2024-11-26 22:54:52.662373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:13.697 [2024-11-26 22:54:52.662654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.697 BaseBdev4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.697 22:54:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 [ 00:10:13.697 { 00:10:13.697 "name": "BaseBdev4", 00:10:13.697 "aliases": [ 00:10:13.697 "7a7a2d2d-ca86-4c8b-997b-0185808746d9" 00:10:13.697 ], 00:10:13.697 "product_name": "Malloc disk", 00:10:13.697 "block_size": 512, 00:10:13.697 "num_blocks": 65536, 00:10:13.697 "uuid": "7a7a2d2d-ca86-4c8b-997b-0185808746d9", 00:10:13.697 "assigned_rate_limits": { 00:10:13.697 "rw_ios_per_sec": 0, 00:10:13.697 "rw_mbytes_per_sec": 0, 00:10:13.697 "r_mbytes_per_sec": 0, 00:10:13.697 "w_mbytes_per_sec": 0 00:10:13.697 }, 00:10:13.697 "claimed": true, 00:10:13.697 "claim_type": "exclusive_write", 00:10:13.697 "zoned": false, 00:10:13.697 "supported_io_types": { 00:10:13.697 "read": true, 00:10:13.697 "write": true, 00:10:13.697 "unmap": true, 00:10:13.697 "flush": true, 00:10:13.697 "reset": true, 00:10:13.697 "nvme_admin": false, 00:10:13.697 "nvme_io": false, 00:10:13.697 "nvme_io_md": false, 00:10:13.697 "write_zeroes": true, 00:10:13.697 "zcopy": true, 00:10:13.697 "get_zone_info": false, 
00:10:13.697 "zone_management": false, 00:10:13.697 "zone_append": false, 00:10:13.697 "compare": false, 00:10:13.697 "compare_and_write": false, 00:10:13.697 "abort": true, 00:10:13.697 "seek_hole": false, 00:10:13.697 "seek_data": false, 00:10:13.697 "copy": true, 00:10:13.697 "nvme_iov_md": false 00:10:13.697 }, 00:10:13.697 "memory_domains": [ 00:10:13.697 { 00:10:13.697 "dma_device_id": "system", 00:10:13.697 "dma_device_type": 1 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.697 "dma_device_type": 2 00:10:13.697 } 00:10:13.697 ], 00:10:13.697 "driver_specific": {} 00:10:13.697 } 00:10:13.697 ] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.697 "name": "Existed_Raid", 00:10:13.697 "uuid": "03ff35a2-1152-4195-9fce-2e911eb11208", 00:10:13.697 "strip_size_kb": 64, 00:10:13.697 "state": "online", 00:10:13.697 "raid_level": "concat", 00:10:13.697 "superblock": false, 00:10:13.697 "num_base_bdevs": 4, 00:10:13.697 "num_base_bdevs_discovered": 4, 00:10:13.697 "num_base_bdevs_operational": 4, 00:10:13.697 "base_bdevs_list": [ 00:10:13.697 { 00:10:13.697 "name": "BaseBdev1", 00:10:13.697 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 65536 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": "BaseBdev2", 00:10:13.697 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 65536 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": "BaseBdev3", 00:10:13.697 "uuid": "7648c83c-c64f-4808-ac11-035270e9e0b0", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 65536 00:10:13.697 }, 00:10:13.697 { 
00:10:13.697 "name": "BaseBdev4", 00:10:13.697 "uuid": "7a7a2d2d-ca86-4c8b-997b-0185808746d9", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 65536 00:10:13.697 } 00:10:13.697 ] 00:10:13.697 }' 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.697 22:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.266 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.266 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.267 [2024-11-26 22:54:53.146153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.267 "name": "Existed_Raid", 00:10:14.267 "aliases": [ 00:10:14.267 
"03ff35a2-1152-4195-9fce-2e911eb11208" 00:10:14.267 ], 00:10:14.267 "product_name": "Raid Volume", 00:10:14.267 "block_size": 512, 00:10:14.267 "num_blocks": 262144, 00:10:14.267 "uuid": "03ff35a2-1152-4195-9fce-2e911eb11208", 00:10:14.267 "assigned_rate_limits": { 00:10:14.267 "rw_ios_per_sec": 0, 00:10:14.267 "rw_mbytes_per_sec": 0, 00:10:14.267 "r_mbytes_per_sec": 0, 00:10:14.267 "w_mbytes_per_sec": 0 00:10:14.267 }, 00:10:14.267 "claimed": false, 00:10:14.267 "zoned": false, 00:10:14.267 "supported_io_types": { 00:10:14.267 "read": true, 00:10:14.267 "write": true, 00:10:14.267 "unmap": true, 00:10:14.267 "flush": true, 00:10:14.267 "reset": true, 00:10:14.267 "nvme_admin": false, 00:10:14.267 "nvme_io": false, 00:10:14.267 "nvme_io_md": false, 00:10:14.267 "write_zeroes": true, 00:10:14.267 "zcopy": false, 00:10:14.267 "get_zone_info": false, 00:10:14.267 "zone_management": false, 00:10:14.267 "zone_append": false, 00:10:14.267 "compare": false, 00:10:14.267 "compare_and_write": false, 00:10:14.267 "abort": false, 00:10:14.267 "seek_hole": false, 00:10:14.267 "seek_data": false, 00:10:14.267 "copy": false, 00:10:14.267 "nvme_iov_md": false 00:10:14.267 }, 00:10:14.267 "memory_domains": [ 00:10:14.267 { 00:10:14.267 "dma_device_id": "system", 00:10:14.267 "dma_device_type": 1 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.267 "dma_device_type": 2 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "system", 00:10:14.267 "dma_device_type": 1 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.267 "dma_device_type": 2 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "system", 00:10:14.267 "dma_device_type": 1 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.267 "dma_device_type": 2 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "dma_device_id": "system", 00:10:14.267 "dma_device_type": 1 00:10:14.267 }, 
00:10:14.267 { 00:10:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.267 "dma_device_type": 2 00:10:14.267 } 00:10:14.267 ], 00:10:14.267 "driver_specific": { 00:10:14.267 "raid": { 00:10:14.267 "uuid": "03ff35a2-1152-4195-9fce-2e911eb11208", 00:10:14.267 "strip_size_kb": 64, 00:10:14.267 "state": "online", 00:10:14.267 "raid_level": "concat", 00:10:14.267 "superblock": false, 00:10:14.267 "num_base_bdevs": 4, 00:10:14.267 "num_base_bdevs_discovered": 4, 00:10:14.267 "num_base_bdevs_operational": 4, 00:10:14.267 "base_bdevs_list": [ 00:10:14.267 { 00:10:14.267 "name": "BaseBdev1", 00:10:14.267 "uuid": "a1567ff7-cda7-46df-a232-f1290fd252d1", 00:10:14.267 "is_configured": true, 00:10:14.267 "data_offset": 0, 00:10:14.267 "data_size": 65536 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "name": "BaseBdev2", 00:10:14.267 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:14.267 "is_configured": true, 00:10:14.267 "data_offset": 0, 00:10:14.267 "data_size": 65536 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "name": "BaseBdev3", 00:10:14.267 "uuid": "7648c83c-c64f-4808-ac11-035270e9e0b0", 00:10:14.267 "is_configured": true, 00:10:14.267 "data_offset": 0, 00:10:14.267 "data_size": 65536 00:10:14.267 }, 00:10:14.267 { 00:10:14.267 "name": "BaseBdev4", 00:10:14.267 "uuid": "7a7a2d2d-ca86-4c8b-997b-0185808746d9", 00:10:14.267 "is_configured": true, 00:10:14.267 "data_offset": 0, 00:10:14.267 "data_size": 65536 00:10:14.267 } 00:10:14.267 ] 00:10:14.267 } 00:10:14.267 } 00:10:14.267 }' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:14.267 BaseBdev2 00:10:14.267 BaseBdev3 00:10:14.267 BaseBdev4' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.267 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:10:14.528 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.529 [2024-11-26 22:54:53.486008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.529 [2024-11-26 22:54:53.486050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.529 [2024-11-26 22:54:53.486137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.529 "name": "Existed_Raid", 00:10:14.529 "uuid": "03ff35a2-1152-4195-9fce-2e911eb11208", 00:10:14.529 "strip_size_kb": 64, 00:10:14.529 "state": "offline", 00:10:14.529 "raid_level": "concat", 00:10:14.529 "superblock": false, 00:10:14.529 "num_base_bdevs": 4, 00:10:14.529 "num_base_bdevs_discovered": 3, 00:10:14.529 "num_base_bdevs_operational": 3, 00:10:14.529 "base_bdevs_list": [ 00:10:14.529 { 00:10:14.529 "name": null, 00:10:14.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.529 "is_configured": false, 00:10:14.529 "data_offset": 0, 00:10:14.529 "data_size": 65536 00:10:14.529 }, 00:10:14.529 { 00:10:14.529 "name": "BaseBdev2", 00:10:14.529 "uuid": "bfdf97dc-85dd-4220-98d3-2688a94be3ab", 00:10:14.529 "is_configured": true, 00:10:14.529 "data_offset": 0, 00:10:14.529 "data_size": 65536 00:10:14.529 }, 00:10:14.529 { 00:10:14.529 "name": "BaseBdev3", 00:10:14.529 "uuid": "7648c83c-c64f-4808-ac11-035270e9e0b0", 
00:10:14.529 "is_configured": true, 00:10:14.529 "data_offset": 0, 00:10:14.529 "data_size": 65536 00:10:14.529 }, 00:10:14.529 { 00:10:14.529 "name": "BaseBdev4", 00:10:14.529 "uuid": "7a7a2d2d-ca86-4c8b-997b-0185808746d9", 00:10:14.529 "is_configured": true, 00:10:14.529 "data_offset": 0, 00:10:14.529 "data_size": 65536 00:10:14.529 } 00:10:14.529 ] 00:10:14.529 }' 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.529 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 [2024-11-26 22:54:53.999232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 [2024-11-26 22:54:54.064085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.099 
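The xtrace above is walking the test's teardown loop: for each remaining base bdev it re-reads the raid bdev's name via `bdev_raid_get_bdevs`, checks it is still `Existed_Raid`, then deletes the next malloc bdev. A minimal stand-alone sketch of that control flow follows; the bdev names are taken from the log, but the `raid_bdev` assignment is a stand-in for the real `rpc_cmd ... | jq -r '.[0]["name"]'` plumbing, not the actual RPC client:

```shell
# Sketch of the @270/@271/@276 loop driven above (stand-in values, no real RPCs).
num_base_bdevs=4
raid_bdev=Existed_Raid                      # stand-in for the jq name lookup
for (( i = 1; i < num_base_bdevs; i++ )); do
  # @272: bail out if the raid bdev is no longer reported
  [ "$raid_bdev" != Existed_Raid ] && exit 1
  echo "delete BaseBdev$((i + 1))"          # stand-in for: rpc_cmd bdev_malloc_delete ...
done
```

Because this run uses `concat` (no redundancy, per the `has_redundancy` check earlier in the log), the state assertions around this loop expect the array to drop to `offline` after the first base bdev removal rather than staying degraded-online.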
22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.099 [2024-11-26 22:54:54.143964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:15.099 [2024-11-26 22:54:54.144052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.099 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.360 BaseBdev2 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [ 00:10:15.361 { 00:10:15.361 "name": "BaseBdev2", 00:10:15.361 "aliases": [ 00:10:15.361 "5e3769bd-db11-47cf-b4f9-1bdf40815c1c" 00:10:15.361 ], 00:10:15.361 "product_name": "Malloc disk", 00:10:15.361 "block_size": 512, 00:10:15.361 "num_blocks": 65536, 00:10:15.361 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:15.361 "assigned_rate_limits": { 00:10:15.361 "rw_ios_per_sec": 0, 00:10:15.361 "rw_mbytes_per_sec": 0, 00:10:15.361 "r_mbytes_per_sec": 0, 00:10:15.361 "w_mbytes_per_sec": 0 00:10:15.361 }, 00:10:15.361 "claimed": false, 00:10:15.361 "zoned": false, 00:10:15.361 "supported_io_types": { 00:10:15.361 "read": true, 00:10:15.361 "write": true, 00:10:15.361 "unmap": true, 00:10:15.361 "flush": true, 00:10:15.361 "reset": true, 00:10:15.361 "nvme_admin": false, 00:10:15.361 "nvme_io": false, 00:10:15.361 "nvme_io_md": false, 00:10:15.361 "write_zeroes": true, 00:10:15.361 "zcopy": true, 00:10:15.361 "get_zone_info": false, 00:10:15.361 "zone_management": false, 00:10:15.361 "zone_append": false, 00:10:15.361 "compare": false, 00:10:15.361 "compare_and_write": false, 00:10:15.361 "abort": true, 00:10:15.361 "seek_hole": false, 00:10:15.361 "seek_data": false, 00:10:15.361 "copy": true, 00:10:15.361 "nvme_iov_md": false 00:10:15.361 }, 00:10:15.361 "memory_domains": [ 00:10:15.361 { 00:10:15.361 "dma_device_id": "system", 00:10:15.361 
"dma_device_type": 1 00:10:15.361 }, 00:10:15.361 { 00:10:15.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.361 "dma_device_type": 2 00:10:15.361 } 00:10:15.361 ], 00:10:15.361 "driver_specific": {} 00:10:15.361 } 00:10:15.361 ] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 BaseBdev3 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [ 00:10:15.361 { 00:10:15.361 "name": "BaseBdev3", 00:10:15.361 "aliases": [ 00:10:15.361 "f23ba750-df6d-4fea-a4ed-fb49a516cdf9" 00:10:15.361 ], 00:10:15.361 "product_name": "Malloc disk", 00:10:15.361 "block_size": 512, 00:10:15.361 "num_blocks": 65536, 00:10:15.361 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:15.361 "assigned_rate_limits": { 00:10:15.361 "rw_ios_per_sec": 0, 00:10:15.361 "rw_mbytes_per_sec": 0, 00:10:15.361 "r_mbytes_per_sec": 0, 00:10:15.361 "w_mbytes_per_sec": 0 00:10:15.361 }, 00:10:15.361 "claimed": false, 00:10:15.361 "zoned": false, 00:10:15.361 "supported_io_types": { 00:10:15.361 "read": true, 00:10:15.361 "write": true, 00:10:15.361 "unmap": true, 00:10:15.361 "flush": true, 00:10:15.361 "reset": true, 00:10:15.361 "nvme_admin": false, 00:10:15.361 "nvme_io": false, 00:10:15.361 "nvme_io_md": false, 00:10:15.361 "write_zeroes": true, 00:10:15.361 "zcopy": true, 00:10:15.361 "get_zone_info": false, 00:10:15.361 "zone_management": false, 00:10:15.361 "zone_append": false, 00:10:15.361 "compare": false, 00:10:15.361 "compare_and_write": false, 00:10:15.361 "abort": true, 00:10:15.361 "seek_hole": false, 00:10:15.361 "seek_data": false, 00:10:15.361 "copy": true, 00:10:15.361 "nvme_iov_md": false 00:10:15.361 }, 00:10:15.361 "memory_domains": [ 00:10:15.361 { 00:10:15.361 "dma_device_id": "system", 00:10:15.361 
"dma_device_type": 1 00:10:15.361 }, 00:10:15.361 { 00:10:15.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.361 "dma_device_type": 2 00:10:15.361 } 00:10:15.361 ], 00:10:15.361 "driver_specific": {} 00:10:15.361 } 00:10:15.361 ] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 BaseBdev4 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [ 00:10:15.361 { 00:10:15.361 "name": "BaseBdev4", 00:10:15.361 "aliases": [ 00:10:15.361 "1019803f-2a13-48dc-b09f-030e406c983b" 00:10:15.361 ], 00:10:15.361 "product_name": "Malloc disk", 00:10:15.361 "block_size": 512, 00:10:15.361 "num_blocks": 65536, 00:10:15.361 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:15.361 "assigned_rate_limits": { 00:10:15.361 "rw_ios_per_sec": 0, 00:10:15.361 "rw_mbytes_per_sec": 0, 00:10:15.361 "r_mbytes_per_sec": 0, 00:10:15.361 "w_mbytes_per_sec": 0 00:10:15.361 }, 00:10:15.361 "claimed": false, 00:10:15.361 "zoned": false, 00:10:15.361 "supported_io_types": { 00:10:15.361 "read": true, 00:10:15.361 "write": true, 00:10:15.361 "unmap": true, 00:10:15.361 "flush": true, 00:10:15.361 "reset": true, 00:10:15.361 "nvme_admin": false, 00:10:15.361 "nvme_io": false, 00:10:15.361 "nvme_io_md": false, 00:10:15.361 "write_zeroes": true, 00:10:15.361 "zcopy": true, 00:10:15.361 "get_zone_info": false, 00:10:15.361 "zone_management": false, 00:10:15.361 "zone_append": false, 00:10:15.361 "compare": false, 00:10:15.361 "compare_and_write": false, 00:10:15.361 "abort": true, 00:10:15.361 "seek_hole": false, 00:10:15.361 "seek_data": false, 00:10:15.361 "copy": true, 00:10:15.361 "nvme_iov_md": false 00:10:15.361 }, 00:10:15.361 "memory_domains": [ 00:10:15.361 { 00:10:15.361 "dma_device_id": "system", 00:10:15.361 
"dma_device_type": 1 00:10:15.361 }, 00:10:15.361 { 00:10:15.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.361 "dma_device_type": 2 00:10:15.361 } 00:10:15.361 ], 00:10:15.361 "driver_specific": {} 00:10:15.361 } 00:10:15.361 ] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [2024-11-26 22:54:54.389355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.361 [2024-11-26 22:54:54.389461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.361 [2024-11-26 22:54:54.389507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.361 [2024-11-26 22:54:54.391710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.361 [2024-11-26 22:54:54.391821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.361 22:54:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.362 "name": "Existed_Raid", 00:10:15.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.362 "strip_size_kb": 64, 00:10:15.362 "state": "configuring", 00:10:15.362 "raid_level": "concat", 00:10:15.362 "superblock": false, 00:10:15.362 "num_base_bdevs": 4, 00:10:15.362 "num_base_bdevs_discovered": 3, 00:10:15.362 
"num_base_bdevs_operational": 4, 00:10:15.362 "base_bdevs_list": [ 00:10:15.362 { 00:10:15.362 "name": "BaseBdev1", 00:10:15.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.362 "is_configured": false, 00:10:15.362 "data_offset": 0, 00:10:15.362 "data_size": 0 00:10:15.362 }, 00:10:15.362 { 00:10:15.362 "name": "BaseBdev2", 00:10:15.362 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:15.362 "is_configured": true, 00:10:15.362 "data_offset": 0, 00:10:15.362 "data_size": 65536 00:10:15.362 }, 00:10:15.362 { 00:10:15.362 "name": "BaseBdev3", 00:10:15.362 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:15.362 "is_configured": true, 00:10:15.362 "data_offset": 0, 00:10:15.362 "data_size": 65536 00:10:15.362 }, 00:10:15.362 { 00:10:15.362 "name": "BaseBdev4", 00:10:15.362 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:15.362 "is_configured": true, 00:10:15.362 "data_offset": 0, 00:10:15.362 "data_size": 65536 00:10:15.362 } 00:10:15.362 ] 00:10:15.362 }' 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.362 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 [2024-11-26 22:54:54.793415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.933 "name": "Existed_Raid", 00:10:15.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.933 "strip_size_kb": 64, 00:10:15.933 "state": "configuring", 00:10:15.933 "raid_level": "concat", 00:10:15.933 "superblock": false, 00:10:15.933 "num_base_bdevs": 4, 00:10:15.933 "num_base_bdevs_discovered": 2, 00:10:15.933 "num_base_bdevs_operational": 4, 00:10:15.933 "base_bdevs_list": [ 
00:10:15.933 { 00:10:15.933 "name": "BaseBdev1", 00:10:15.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.933 "is_configured": false, 00:10:15.933 "data_offset": 0, 00:10:15.933 "data_size": 0 00:10:15.933 }, 00:10:15.933 { 00:10:15.933 "name": null, 00:10:15.933 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:15.933 "is_configured": false, 00:10:15.933 "data_offset": 0, 00:10:15.933 "data_size": 65536 00:10:15.933 }, 00:10:15.933 { 00:10:15.933 "name": "BaseBdev3", 00:10:15.933 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:15.933 "is_configured": true, 00:10:15.933 "data_offset": 0, 00:10:15.933 "data_size": 65536 00:10:15.933 }, 00:10:15.933 { 00:10:15.933 "name": "BaseBdev4", 00:10:15.933 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:15.933 "is_configured": true, 00:10:15.933 "data_offset": 0, 00:10:15.933 "data_size": 65536 00:10:15.933 } 00:10:15.933 ] 00:10:15.933 }' 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.933 22:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 [2024-11-26 22:54:55.238355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.193 BaseBdev1 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 [ 00:10:16.193 { 00:10:16.193 "name": "BaseBdev1", 00:10:16.193 "aliases": [ 00:10:16.193 
"4dda09c6-28b8-4174-b956-814c839c3146" 00:10:16.193 ], 00:10:16.193 "product_name": "Malloc disk", 00:10:16.193 "block_size": 512, 00:10:16.193 "num_blocks": 65536, 00:10:16.193 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:16.193 "assigned_rate_limits": { 00:10:16.193 "rw_ios_per_sec": 0, 00:10:16.193 "rw_mbytes_per_sec": 0, 00:10:16.193 "r_mbytes_per_sec": 0, 00:10:16.193 "w_mbytes_per_sec": 0 00:10:16.193 }, 00:10:16.193 "claimed": true, 00:10:16.193 "claim_type": "exclusive_write", 00:10:16.193 "zoned": false, 00:10:16.193 "supported_io_types": { 00:10:16.193 "read": true, 00:10:16.193 "write": true, 00:10:16.193 "unmap": true, 00:10:16.193 "flush": true, 00:10:16.193 "reset": true, 00:10:16.193 "nvme_admin": false, 00:10:16.193 "nvme_io": false, 00:10:16.193 "nvme_io_md": false, 00:10:16.193 "write_zeroes": true, 00:10:16.193 "zcopy": true, 00:10:16.193 "get_zone_info": false, 00:10:16.193 "zone_management": false, 00:10:16.193 "zone_append": false, 00:10:16.193 "compare": false, 00:10:16.193 "compare_and_write": false, 00:10:16.193 "abort": true, 00:10:16.193 "seek_hole": false, 00:10:16.193 "seek_data": false, 00:10:16.193 "copy": true, 00:10:16.193 "nvme_iov_md": false 00:10:16.193 }, 00:10:16.193 "memory_domains": [ 00:10:16.193 { 00:10:16.193 "dma_device_id": "system", 00:10:16.193 "dma_device_type": 1 00:10:16.193 }, 00:10:16.193 { 00:10:16.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.193 "dma_device_type": 2 00:10:16.193 } 00:10:16.193 ], 00:10:16.193 "driver_specific": {} 00:10:16.193 } 00:10:16.193 ] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.193 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.454 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.454 "name": "Existed_Raid", 00:10:16.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.454 "strip_size_kb": 64, 00:10:16.454 "state": "configuring", 00:10:16.454 "raid_level": "concat", 00:10:16.454 "superblock": false, 00:10:16.454 "num_base_bdevs": 4, 00:10:16.454 "num_base_bdevs_discovered": 3, 00:10:16.454 "num_base_bdevs_operational": 4, 00:10:16.454 
"base_bdevs_list": [ 00:10:16.454 { 00:10:16.454 "name": "BaseBdev1", 00:10:16.454 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:16.454 "is_configured": true, 00:10:16.454 "data_offset": 0, 00:10:16.454 "data_size": 65536 00:10:16.454 }, 00:10:16.454 { 00:10:16.454 "name": null, 00:10:16.454 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:16.454 "is_configured": false, 00:10:16.454 "data_offset": 0, 00:10:16.454 "data_size": 65536 00:10:16.454 }, 00:10:16.454 { 00:10:16.454 "name": "BaseBdev3", 00:10:16.454 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:16.454 "is_configured": true, 00:10:16.454 "data_offset": 0, 00:10:16.454 "data_size": 65536 00:10:16.454 }, 00:10:16.454 { 00:10:16.454 "name": "BaseBdev4", 00:10:16.454 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:16.454 "is_configured": true, 00:10:16.454 "data_offset": 0, 00:10:16.454 "data_size": 65536 00:10:16.454 } 00:10:16.454 ] 00:10:16.454 }' 00:10:16.454 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.454 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.713 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.714 22:54:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 [2024-11-26 22:54:55.826533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.714 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.973 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:16.973 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.973 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.973 "name": "Existed_Raid", 00:10:16.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.973 "strip_size_kb": 64, 00:10:16.973 "state": "configuring", 00:10:16.973 "raid_level": "concat", 00:10:16.973 "superblock": false, 00:10:16.973 "num_base_bdevs": 4, 00:10:16.973 "num_base_bdevs_discovered": 2, 00:10:16.973 "num_base_bdevs_operational": 4, 00:10:16.973 "base_bdevs_list": [ 00:10:16.973 { 00:10:16.973 "name": "BaseBdev1", 00:10:16.973 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:16.973 "is_configured": true, 00:10:16.973 "data_offset": 0, 00:10:16.973 "data_size": 65536 00:10:16.973 }, 00:10:16.973 { 00:10:16.973 "name": null, 00:10:16.973 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:16.973 "is_configured": false, 00:10:16.973 "data_offset": 0, 00:10:16.973 "data_size": 65536 00:10:16.973 }, 00:10:16.973 { 00:10:16.973 "name": null, 00:10:16.973 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:16.973 "is_configured": false, 00:10:16.973 "data_offset": 0, 00:10:16.973 "data_size": 65536 00:10:16.973 }, 00:10:16.973 { 00:10:16.973 "name": "BaseBdev4", 00:10:16.973 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:16.973 "is_configured": true, 00:10:16.973 "data_offset": 0, 00:10:16.973 "data_size": 65536 00:10:16.973 } 00:10:16.973 ] 00:10:16.973 }' 00:10:16.973 22:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.973 22:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 [2024-11-26 22:54:56.314738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.234 
22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.494 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.494 "name": "Existed_Raid", 00:10:17.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.494 "strip_size_kb": 64, 00:10:17.494 "state": "configuring", 00:10:17.494 "raid_level": "concat", 00:10:17.494 "superblock": false, 00:10:17.494 "num_base_bdevs": 4, 00:10:17.494 "num_base_bdevs_discovered": 3, 00:10:17.494 "num_base_bdevs_operational": 4, 00:10:17.494 "base_bdevs_list": [ 00:10:17.494 { 00:10:17.494 "name": "BaseBdev1", 00:10:17.494 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:17.494 "is_configured": true, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": null, 00:10:17.494 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:17.494 "is_configured": false, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": "BaseBdev3", 00:10:17.494 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:17.494 "is_configured": true, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": "BaseBdev4", 
00:10:17.494 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:17.494 "is_configured": true, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 } 00:10:17.494 ] 00:10:17.494 }' 00:10:17.494 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.494 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 [2024-11-26 22:54:56.782878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.755 "name": "Existed_Raid", 00:10:17.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.755 "strip_size_kb": 64, 00:10:17.755 "state": "configuring", 00:10:17.755 "raid_level": "concat", 00:10:17.755 "superblock": false, 00:10:17.755 "num_base_bdevs": 4, 00:10:17.755 "num_base_bdevs_discovered": 2, 00:10:17.755 "num_base_bdevs_operational": 4, 00:10:17.755 "base_bdevs_list": [ 00:10:17.755 { 00:10:17.755 "name": null, 00:10:17.755 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:17.755 
"is_configured": false, 00:10:17.755 "data_offset": 0, 00:10:17.755 "data_size": 65536 00:10:17.755 }, 00:10:17.755 { 00:10:17.755 "name": null, 00:10:17.755 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:17.755 "is_configured": false, 00:10:17.755 "data_offset": 0, 00:10:17.755 "data_size": 65536 00:10:17.755 }, 00:10:17.755 { 00:10:17.755 "name": "BaseBdev3", 00:10:17.755 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:17.755 "is_configured": true, 00:10:17.755 "data_offset": 0, 00:10:17.755 "data_size": 65536 00:10:17.755 }, 00:10:17.755 { 00:10:17.755 "name": "BaseBdev4", 00:10:17.755 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:17.755 "is_configured": true, 00:10:17.755 "data_offset": 0, 00:10:17.755 "data_size": 65536 00:10:17.755 } 00:10:17.755 ] 00:10:17.755 }' 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.755 22:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.324 [2024-11-26 22:54:57.318956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:18.324 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.324 "name": "Existed_Raid", 00:10:18.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.324 "strip_size_kb": 64, 00:10:18.324 "state": "configuring", 00:10:18.324 "raid_level": "concat", 00:10:18.324 "superblock": false, 00:10:18.324 "num_base_bdevs": 4, 00:10:18.324 "num_base_bdevs_discovered": 3, 00:10:18.324 "num_base_bdevs_operational": 4, 00:10:18.324 "base_bdevs_list": [ 00:10:18.324 { 00:10:18.324 "name": null, 00:10:18.324 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:18.324 "is_configured": false, 00:10:18.324 "data_offset": 0, 00:10:18.324 "data_size": 65536 00:10:18.324 }, 00:10:18.324 { 00:10:18.324 "name": "BaseBdev2", 00:10:18.324 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:18.324 "is_configured": true, 00:10:18.325 "data_offset": 0, 00:10:18.325 "data_size": 65536 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "name": "BaseBdev3", 00:10:18.325 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:18.325 "is_configured": true, 00:10:18.325 "data_offset": 0, 00:10:18.325 "data_size": 65536 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "name": "BaseBdev4", 00:10:18.325 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:18.325 "is_configured": true, 00:10:18.325 "data_offset": 0, 00:10:18.325 "data_size": 65536 00:10:18.325 } 00:10:18.325 ] 00:10:18.325 }' 00:10:18.325 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.325 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4dda09c6-28b8-4174-b956-814c839c3146 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 [2024-11-26 22:54:57.827981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.895 [2024-11-26 22:54:57.828107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.895 [2024-11-26 22:54:57.828141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:18.895 [2024-11-26 22:54:57.828479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:18.895 [2024-11-26 22:54:57.828682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.895 [2024-11-26 22:54:57.828727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:18.895 [2024-11-26 22:54:57.828997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.895 NewBaseBdev 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 [ 00:10:18.895 { 00:10:18.895 "name": "NewBaseBdev", 00:10:18.895 "aliases": [ 00:10:18.895 "4dda09c6-28b8-4174-b956-814c839c3146" 00:10:18.895 ], 00:10:18.895 "product_name": "Malloc disk", 00:10:18.895 
"block_size": 512, 00:10:18.895 "num_blocks": 65536, 00:10:18.895 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:18.895 "assigned_rate_limits": { 00:10:18.895 "rw_ios_per_sec": 0, 00:10:18.895 "rw_mbytes_per_sec": 0, 00:10:18.895 "r_mbytes_per_sec": 0, 00:10:18.895 "w_mbytes_per_sec": 0 00:10:18.895 }, 00:10:18.895 "claimed": true, 00:10:18.895 "claim_type": "exclusive_write", 00:10:18.895 "zoned": false, 00:10:18.895 "supported_io_types": { 00:10:18.895 "read": true, 00:10:18.895 "write": true, 00:10:18.895 "unmap": true, 00:10:18.895 "flush": true, 00:10:18.895 "reset": true, 00:10:18.895 "nvme_admin": false, 00:10:18.895 "nvme_io": false, 00:10:18.895 "nvme_io_md": false, 00:10:18.895 "write_zeroes": true, 00:10:18.895 "zcopy": true, 00:10:18.895 "get_zone_info": false, 00:10:18.895 "zone_management": false, 00:10:18.895 "zone_append": false, 00:10:18.895 "compare": false, 00:10:18.895 "compare_and_write": false, 00:10:18.895 "abort": true, 00:10:18.895 "seek_hole": false, 00:10:18.895 "seek_data": false, 00:10:18.895 "copy": true, 00:10:18.895 "nvme_iov_md": false 00:10:18.895 }, 00:10:18.895 "memory_domains": [ 00:10:18.895 { 00:10:18.895 "dma_device_id": "system", 00:10:18.895 "dma_device_type": 1 00:10:18.895 }, 00:10:18.895 { 00:10:18.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.895 "dma_device_type": 2 00:10:18.895 } 00:10:18.895 ], 00:10:18.895 "driver_specific": {} 00:10:18.895 } 00:10:18.895 ] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.895 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.895 "name": "Existed_Raid", 00:10:18.895 "uuid": "8f97b51c-73e2-4a43-8a4a-1ea0ec0deea9", 00:10:18.895 "strip_size_kb": 64, 00:10:18.895 "state": "online", 00:10:18.895 "raid_level": "concat", 00:10:18.895 "superblock": false, 00:10:18.895 "num_base_bdevs": 4, 00:10:18.895 "num_base_bdevs_discovered": 4, 00:10:18.895 "num_base_bdevs_operational": 4, 00:10:18.895 "base_bdevs_list": [ 00:10:18.895 { 00:10:18.895 "name": "NewBaseBdev", 00:10:18.895 "uuid": 
"4dda09c6-28b8-4174-b956-814c839c3146", 00:10:18.895 "is_configured": true, 00:10:18.895 "data_offset": 0, 00:10:18.895 "data_size": 65536 00:10:18.895 }, 00:10:18.895 { 00:10:18.895 "name": "BaseBdev2", 00:10:18.895 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:18.895 "is_configured": true, 00:10:18.895 "data_offset": 0, 00:10:18.895 "data_size": 65536 00:10:18.895 }, 00:10:18.895 { 00:10:18.895 "name": "BaseBdev3", 00:10:18.895 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:18.895 "is_configured": true, 00:10:18.895 "data_offset": 0, 00:10:18.895 "data_size": 65536 00:10:18.895 }, 00:10:18.895 { 00:10:18.895 "name": "BaseBdev4", 00:10:18.895 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:18.896 "is_configured": true, 00:10:18.896 "data_offset": 0, 00:10:18.896 "data_size": 65536 00:10:18.896 } 00:10:18.896 ] 00:10:18.896 }' 00:10:18.896 22:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.896 22:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.156 22:54:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.156 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.416 [2024-11-26 22:54:58.288424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.416 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.416 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.416 "name": "Existed_Raid", 00:10:19.416 "aliases": [ 00:10:19.416 "8f97b51c-73e2-4a43-8a4a-1ea0ec0deea9" 00:10:19.416 ], 00:10:19.416 "product_name": "Raid Volume", 00:10:19.416 "block_size": 512, 00:10:19.416 "num_blocks": 262144, 00:10:19.416 "uuid": "8f97b51c-73e2-4a43-8a4a-1ea0ec0deea9", 00:10:19.416 "assigned_rate_limits": { 00:10:19.416 "rw_ios_per_sec": 0, 00:10:19.416 "rw_mbytes_per_sec": 0, 00:10:19.416 "r_mbytes_per_sec": 0, 00:10:19.416 "w_mbytes_per_sec": 0 00:10:19.416 }, 00:10:19.416 "claimed": false, 00:10:19.416 "zoned": false, 00:10:19.416 "supported_io_types": { 00:10:19.416 "read": true, 00:10:19.416 "write": true, 00:10:19.416 "unmap": true, 00:10:19.416 "flush": true, 00:10:19.416 "reset": true, 00:10:19.416 "nvme_admin": false, 00:10:19.416 "nvme_io": false, 00:10:19.416 "nvme_io_md": false, 00:10:19.416 "write_zeroes": true, 00:10:19.416 "zcopy": false, 00:10:19.416 "get_zone_info": false, 00:10:19.416 "zone_management": false, 00:10:19.416 "zone_append": false, 00:10:19.416 "compare": false, 00:10:19.416 "compare_and_write": false, 00:10:19.416 "abort": false, 00:10:19.416 "seek_hole": false, 00:10:19.416 "seek_data": false, 00:10:19.416 "copy": false, 00:10:19.416 "nvme_iov_md": false 00:10:19.416 }, 00:10:19.416 "memory_domains": [ 00:10:19.416 { 00:10:19.416 "dma_device_id": "system", 00:10:19.416 "dma_device_type": 1 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.417 
"dma_device_type": 2 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "system", 00:10:19.417 "dma_device_type": 1 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.417 "dma_device_type": 2 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "system", 00:10:19.417 "dma_device_type": 1 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.417 "dma_device_type": 2 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "system", 00:10:19.417 "dma_device_type": 1 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.417 "dma_device_type": 2 00:10:19.417 } 00:10:19.417 ], 00:10:19.417 "driver_specific": { 00:10:19.417 "raid": { 00:10:19.417 "uuid": "8f97b51c-73e2-4a43-8a4a-1ea0ec0deea9", 00:10:19.417 "strip_size_kb": 64, 00:10:19.417 "state": "online", 00:10:19.417 "raid_level": "concat", 00:10:19.417 "superblock": false, 00:10:19.417 "num_base_bdevs": 4, 00:10:19.417 "num_base_bdevs_discovered": 4, 00:10:19.417 "num_base_bdevs_operational": 4, 00:10:19.417 "base_bdevs_list": [ 00:10:19.417 { 00:10:19.417 "name": "NewBaseBdev", 00:10:19.417 "uuid": "4dda09c6-28b8-4174-b956-814c839c3146", 00:10:19.417 "is_configured": true, 00:10:19.417 "data_offset": 0, 00:10:19.417 "data_size": 65536 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "name": "BaseBdev2", 00:10:19.417 "uuid": "5e3769bd-db11-47cf-b4f9-1bdf40815c1c", 00:10:19.417 "is_configured": true, 00:10:19.417 "data_offset": 0, 00:10:19.417 "data_size": 65536 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "name": "BaseBdev3", 00:10:19.417 "uuid": "f23ba750-df6d-4fea-a4ed-fb49a516cdf9", 00:10:19.417 "is_configured": true, 00:10:19.417 "data_offset": 0, 00:10:19.417 "data_size": 65536 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "name": "BaseBdev4", 00:10:19.417 "uuid": "1019803f-2a13-48dc-b09f-030e406c983b", 00:10:19.417 "is_configured": true, 00:10:19.417 "data_offset": 0, 
00:10:19.417 "data_size": 65536 00:10:19.417 } 00:10:19.417 ] 00:10:19.417 } 00:10:19.417 } 00:10:19.417 }' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.417 BaseBdev2 00:10:19.417 BaseBdev3 00:10:19.417 BaseBdev4' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.417 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.676 22:54:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.676 [2024-11-26 22:54:58.604180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.676 [2024-11-26 22:54:58.604283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.676 [2024-11-26 22:54:58.604393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.676 [2024-11-26 22:54:58.604496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.676 [2024-11-26 22:54:58.604560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83771 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83771 ']' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83771 00:10:19.676 22:54:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83771 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.676 killing process with pid 83771 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83771' 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83771 00:10:19.676 [2024-11-26 22:54:58.646367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.676 22:54:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83771 00:10:19.676 [2024-11-26 22:54:58.725385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.935 22:54:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.935 00:10:19.935 real 0m9.817s 00:10:19.935 user 0m16.434s 00:10:19.935 sys 0m2.208s 00:10:19.935 ************************************ 00:10:19.935 END TEST raid_state_function_test 00:10:19.935 ************************************ 00:10:19.935 22:54:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.935 22:54:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.195 22:54:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:20.195 22:54:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.195 22:54:59 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:10:20.195 22:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.195 ************************************ 00:10:20.195 START TEST raid_state_function_test_sb 00:10:20.195 ************************************ 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:20.195 Process raid pid: 84426 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84426 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84426' 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84426 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84426 ']' 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.195 22:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.195 [2024-11-26 22:54:59.238779] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:20.195 [2024-11-26 22:54:59.238970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.454 [2024-11-26 22:54:59.375106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:20.454 [2024-11-26 22:54:59.413116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.454 [2024-11-26 22:54:59.453087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.454 [2024-11-26 22:54:59.528903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.454 [2024-11-26 22:54:59.529034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 [2024-11-26 22:55:00.076364] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.023 [2024-11-26 22:55:00.076473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.023 [2024-11-26 22:55:00.076540] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.023 [2024-11-26 22:55:00.076568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.023 [2024-11-26 22:55:00.076614] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.023 [2024-11-26 22:55:00.076639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.023 [2024-11-26 22:55:00.076664] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:21.023 [2024-11-26 22:55:00.076712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.023 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.023 "name": "Existed_Raid", 00:10:21.023 "uuid": "c93393bb-836c-4c88-b274-e9a1e3f79bd5", 00:10:21.023 "strip_size_kb": 64, 00:10:21.023 "state": "configuring", 00:10:21.023 "raid_level": "concat", 00:10:21.023 "superblock": true, 00:10:21.023 "num_base_bdevs": 4, 00:10:21.023 "num_base_bdevs_discovered": 0, 00:10:21.023 "num_base_bdevs_operational": 4, 00:10:21.024 "base_bdevs_list": [ 00:10:21.024 { 00:10:21.024 "name": "BaseBdev1", 00:10:21.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.024 "is_configured": false, 00:10:21.024 "data_offset": 0, 00:10:21.024 "data_size": 0 00:10:21.024 }, 00:10:21.024 { 00:10:21.024 "name": "BaseBdev2", 00:10:21.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.024 "is_configured": false, 00:10:21.024 "data_offset": 0, 00:10:21.024 "data_size": 0 00:10:21.024 }, 00:10:21.024 { 00:10:21.024 "name": "BaseBdev3", 00:10:21.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.024 "is_configured": false, 00:10:21.024 "data_offset": 0, 00:10:21.024 "data_size": 0 00:10:21.024 }, 00:10:21.024 { 00:10:21.024 "name": "BaseBdev4", 00:10:21.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.024 "is_configured": false, 00:10:21.024 "data_offset": 0, 00:10:21.024 "data_size": 0 00:10:21.024 } 00:10:21.024 ] 00:10:21.024 }' 00:10:21.024 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.024 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.593 [2024-11-26 22:55:00.528326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.593 [2024-11-26 22:55:00.528413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.593 [2024-11-26 22:55:00.540379] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.593 [2024-11-26 22:55:00.540461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.593 [2024-11-26 22:55:00.540496] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.593 [2024-11-26 22:55:00.540521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.593 [2024-11-26 22:55:00.540546] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.593 [2024-11-26 22:55:00.540570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.593 [2024-11-26 22:55:00.540629] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:21.593 [2024-11-26 22:55:00.540640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.593 [2024-11-26 22:55:00.567293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.593 BaseBdev1 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:21.593 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.593 [ 00:10:21.593 { 00:10:21.593 "name": "BaseBdev1", 00:10:21.593 "aliases": [ 00:10:21.593 "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7" 00:10:21.594 ], 00:10:21.594 "product_name": "Malloc disk", 00:10:21.594 "block_size": 512, 00:10:21.594 "num_blocks": 65536, 00:10:21.594 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:21.594 "assigned_rate_limits": { 00:10:21.594 "rw_ios_per_sec": 0, 00:10:21.594 "rw_mbytes_per_sec": 0, 00:10:21.594 "r_mbytes_per_sec": 0, 00:10:21.594 "w_mbytes_per_sec": 0 00:10:21.594 }, 00:10:21.594 "claimed": true, 00:10:21.594 "claim_type": "exclusive_write", 00:10:21.594 "zoned": false, 00:10:21.594 "supported_io_types": { 00:10:21.594 "read": true, 00:10:21.594 "write": true, 00:10:21.594 "unmap": true, 00:10:21.594 "flush": true, 00:10:21.594 "reset": true, 00:10:21.594 "nvme_admin": false, 00:10:21.594 "nvme_io": false, 00:10:21.594 "nvme_io_md": false, 00:10:21.594 "write_zeroes": true, 00:10:21.594 "zcopy": true, 00:10:21.594 "get_zone_info": false, 00:10:21.594 "zone_management": false, 00:10:21.594 "zone_append": false, 00:10:21.594 "compare": false, 00:10:21.594 "compare_and_write": false, 00:10:21.594 "abort": true, 00:10:21.594 "seek_hole": false, 00:10:21.594 "seek_data": false, 00:10:21.594 "copy": true, 00:10:21.594 "nvme_iov_md": false 00:10:21.594 }, 00:10:21.594 "memory_domains": [ 00:10:21.594 { 00:10:21.594 "dma_device_id": "system", 00:10:21.594 "dma_device_type": 1 00:10:21.594 }, 00:10:21.594 { 00:10:21.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.594 "dma_device_type": 2 00:10:21.594 } 00:10:21.594 ], 00:10:21.594 "driver_specific": {} 00:10:21.594 } 00:10:21.594 ] 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.594 "name": "Existed_Raid", 00:10:21.594 "uuid": "b175222e-c601-44f3-871f-22d4fa7d56cb", 
00:10:21.594 "strip_size_kb": 64, 00:10:21.594 "state": "configuring", 00:10:21.594 "raid_level": "concat", 00:10:21.594 "superblock": true, 00:10:21.594 "num_base_bdevs": 4, 00:10:21.594 "num_base_bdevs_discovered": 1, 00:10:21.594 "num_base_bdevs_operational": 4, 00:10:21.594 "base_bdevs_list": [ 00:10:21.594 { 00:10:21.594 "name": "BaseBdev1", 00:10:21.594 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:21.594 "is_configured": true, 00:10:21.594 "data_offset": 2048, 00:10:21.594 "data_size": 63488 00:10:21.594 }, 00:10:21.594 { 00:10:21.594 "name": "BaseBdev2", 00:10:21.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.594 "is_configured": false, 00:10:21.594 "data_offset": 0, 00:10:21.594 "data_size": 0 00:10:21.594 }, 00:10:21.594 { 00:10:21.594 "name": "BaseBdev3", 00:10:21.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.594 "is_configured": false, 00:10:21.594 "data_offset": 0, 00:10:21.594 "data_size": 0 00:10:21.594 }, 00:10:21.594 { 00:10:21.594 "name": "BaseBdev4", 00:10:21.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.594 "is_configured": false, 00:10:21.594 "data_offset": 0, 00:10:21.594 "data_size": 0 00:10:21.594 } 00:10:21.594 ] 00:10:21.594 }' 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.594 22:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.162 [2024-11-26 22:55:01.063449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.162 [2024-11-26 22:55:01.063559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.162 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.162 [2024-11-26 22:55:01.075520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.162 [2024-11-26 22:55:01.077708] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.162 [2024-11-26 22:55:01.077804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.162 [2024-11-26 22:55:01.077839] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.162 [2024-11-26 22:55:01.077851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.163 [2024-11-26 22:55:01.077862] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.163 [2024-11-26 22:55:01.077870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.163 "name": "Existed_Raid", 00:10:22.163 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:22.163 "strip_size_kb": 64, 00:10:22.163 "state": "configuring", 00:10:22.163 "raid_level": "concat", 00:10:22.163 "superblock": true, 00:10:22.163 
"num_base_bdevs": 4, 00:10:22.163 "num_base_bdevs_discovered": 1, 00:10:22.163 "num_base_bdevs_operational": 4, 00:10:22.163 "base_bdevs_list": [ 00:10:22.163 { 00:10:22.163 "name": "BaseBdev1", 00:10:22.163 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:22.163 "is_configured": true, 00:10:22.163 "data_offset": 2048, 00:10:22.163 "data_size": 63488 00:10:22.163 }, 00:10:22.163 { 00:10:22.163 "name": "BaseBdev2", 00:10:22.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.163 "is_configured": false, 00:10:22.163 "data_offset": 0, 00:10:22.163 "data_size": 0 00:10:22.163 }, 00:10:22.163 { 00:10:22.163 "name": "BaseBdev3", 00:10:22.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.163 "is_configured": false, 00:10:22.163 "data_offset": 0, 00:10:22.163 "data_size": 0 00:10:22.163 }, 00:10:22.163 { 00:10:22.163 "name": "BaseBdev4", 00:10:22.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.163 "is_configured": false, 00:10:22.163 "data_offset": 0, 00:10:22.163 "data_size": 0 00:10:22.163 } 00:10:22.163 ] 00:10:22.163 }' 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.163 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.422 [2024-11-26 22:55:01.520410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.422 BaseBdev2 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.422 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.684 [ 00:10:22.684 { 00:10:22.684 "name": "BaseBdev2", 00:10:22.684 "aliases": [ 00:10:22.684 "27ce4a53-9be7-4689-9dd1-8b19addaf9c4" 00:10:22.685 ], 00:10:22.685 "product_name": "Malloc disk", 00:10:22.685 "block_size": 512, 00:10:22.685 "num_blocks": 65536, 00:10:22.685 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:22.685 "assigned_rate_limits": { 00:10:22.685 "rw_ios_per_sec": 0, 00:10:22.685 "rw_mbytes_per_sec": 0, 00:10:22.685 "r_mbytes_per_sec": 0, 00:10:22.685 "w_mbytes_per_sec": 0 00:10:22.685 }, 00:10:22.685 "claimed": true, 00:10:22.685 "claim_type": 
"exclusive_write", 00:10:22.685 "zoned": false, 00:10:22.685 "supported_io_types": { 00:10:22.685 "read": true, 00:10:22.685 "write": true, 00:10:22.685 "unmap": true, 00:10:22.685 "flush": true, 00:10:22.685 "reset": true, 00:10:22.685 "nvme_admin": false, 00:10:22.685 "nvme_io": false, 00:10:22.685 "nvme_io_md": false, 00:10:22.685 "write_zeroes": true, 00:10:22.685 "zcopy": true, 00:10:22.685 "get_zone_info": false, 00:10:22.685 "zone_management": false, 00:10:22.685 "zone_append": false, 00:10:22.685 "compare": false, 00:10:22.685 "compare_and_write": false, 00:10:22.685 "abort": true, 00:10:22.685 "seek_hole": false, 00:10:22.685 "seek_data": false, 00:10:22.685 "copy": true, 00:10:22.685 "nvme_iov_md": false 00:10:22.685 }, 00:10:22.685 "memory_domains": [ 00:10:22.685 { 00:10:22.685 "dma_device_id": "system", 00:10:22.685 "dma_device_type": 1 00:10:22.685 }, 00:10:22.685 { 00:10:22.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.685 "dma_device_type": 2 00:10:22.685 } 00:10:22.685 ], 00:10:22.685 "driver_specific": {} 00:10:22.685 } 00:10:22.685 ] 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.685 "name": "Existed_Raid", 00:10:22.685 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:22.685 "strip_size_kb": 64, 00:10:22.685 "state": "configuring", 00:10:22.685 "raid_level": "concat", 00:10:22.685 "superblock": true, 00:10:22.685 "num_base_bdevs": 4, 00:10:22.685 "num_base_bdevs_discovered": 2, 00:10:22.685 "num_base_bdevs_operational": 4, 00:10:22.685 "base_bdevs_list": [ 00:10:22.685 { 00:10:22.685 "name": "BaseBdev1", 00:10:22.685 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:22.685 "is_configured": true, 00:10:22.685 "data_offset": 2048, 00:10:22.685 
"data_size": 63488 00:10:22.685 }, 00:10:22.685 { 00:10:22.685 "name": "BaseBdev2", 00:10:22.685 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:22.685 "is_configured": true, 00:10:22.685 "data_offset": 2048, 00:10:22.685 "data_size": 63488 00:10:22.685 }, 00:10:22.685 { 00:10:22.685 "name": "BaseBdev3", 00:10:22.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.685 "is_configured": false, 00:10:22.685 "data_offset": 0, 00:10:22.685 "data_size": 0 00:10:22.685 }, 00:10:22.685 { 00:10:22.685 "name": "BaseBdev4", 00:10:22.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.685 "is_configured": false, 00:10:22.685 "data_offset": 0, 00:10:22.685 "data_size": 0 00:10:22.685 } 00:10:22.685 ] 00:10:22.685 }' 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.685 22:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.945 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.946 [2024-11-26 22:55:02.060428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.946 BaseBdev3 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.946 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.206 [ 00:10:23.206 { 00:10:23.206 "name": "BaseBdev3", 00:10:23.206 "aliases": [ 00:10:23.206 "d21164b1-0fdb-47d5-af67-b940165db4d1" 00:10:23.206 ], 00:10:23.206 "product_name": "Malloc disk", 00:10:23.206 "block_size": 512, 00:10:23.206 "num_blocks": 65536, 00:10:23.206 "uuid": "d21164b1-0fdb-47d5-af67-b940165db4d1", 00:10:23.206 "assigned_rate_limits": { 00:10:23.206 "rw_ios_per_sec": 0, 00:10:23.206 "rw_mbytes_per_sec": 0, 00:10:23.206 "r_mbytes_per_sec": 0, 00:10:23.206 "w_mbytes_per_sec": 0 00:10:23.206 }, 00:10:23.206 "claimed": true, 00:10:23.206 "claim_type": "exclusive_write", 00:10:23.206 "zoned": false, 00:10:23.206 "supported_io_types": { 00:10:23.206 "read": true, 00:10:23.206 "write": true, 00:10:23.206 "unmap": true, 00:10:23.206 "flush": true, 00:10:23.206 "reset": true, 00:10:23.206 "nvme_admin": false, 00:10:23.206 "nvme_io": false, 00:10:23.206 "nvme_io_md": false, 
00:10:23.206 "write_zeroes": true, 00:10:23.206 "zcopy": true, 00:10:23.206 "get_zone_info": false, 00:10:23.206 "zone_management": false, 00:10:23.206 "zone_append": false, 00:10:23.206 "compare": false, 00:10:23.206 "compare_and_write": false, 00:10:23.206 "abort": true, 00:10:23.206 "seek_hole": false, 00:10:23.206 "seek_data": false, 00:10:23.206 "copy": true, 00:10:23.206 "nvme_iov_md": false 00:10:23.206 }, 00:10:23.206 "memory_domains": [ 00:10:23.206 { 00:10:23.206 "dma_device_id": "system", 00:10:23.206 "dma_device_type": 1 00:10:23.206 }, 00:10:23.206 { 00:10:23.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.206 "dma_device_type": 2 00:10:23.206 } 00:10:23.206 ], 00:10:23.206 "driver_specific": {} 00:10:23.206 } 00:10:23.206 ] 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.206 "name": "Existed_Raid", 00:10:23.206 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:23.206 "strip_size_kb": 64, 00:10:23.206 "state": "configuring", 00:10:23.206 "raid_level": "concat", 00:10:23.206 "superblock": true, 00:10:23.206 "num_base_bdevs": 4, 00:10:23.206 "num_base_bdevs_discovered": 3, 00:10:23.206 "num_base_bdevs_operational": 4, 00:10:23.206 "base_bdevs_list": [ 00:10:23.206 { 00:10:23.206 "name": "BaseBdev1", 00:10:23.206 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:23.206 "is_configured": true, 00:10:23.206 "data_offset": 2048, 00:10:23.206 "data_size": 63488 00:10:23.206 }, 00:10:23.206 { 00:10:23.206 "name": "BaseBdev2", 00:10:23.206 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:23.206 "is_configured": true, 00:10:23.206 "data_offset": 2048, 00:10:23.206 "data_size": 63488 00:10:23.206 }, 00:10:23.206 { 00:10:23.206 "name": "BaseBdev3", 00:10:23.206 "uuid": 
"d21164b1-0fdb-47d5-af67-b940165db4d1", 00:10:23.206 "is_configured": true, 00:10:23.206 "data_offset": 2048, 00:10:23.206 "data_size": 63488 00:10:23.206 }, 00:10:23.206 { 00:10:23.206 "name": "BaseBdev4", 00:10:23.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.206 "is_configured": false, 00:10:23.206 "data_offset": 0, 00:10:23.206 "data_size": 0 00:10:23.206 } 00:10:23.206 ] 00:10:23.206 }' 00:10:23.206 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.207 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.466 [2024-11-26 22:55:02.581568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.466 [2024-11-26 22:55:02.581922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:23.466 [2024-11-26 22:55:02.581989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:23.466 [2024-11-26 22:55:02.582414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:23.466 BaseBdev4 00:10:23.466 [2024-11-26 22:55:02.582634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:23.466 [2024-11-26 22:55:02.582656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:23.466 [2024-11-26 22:55:02.582844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.466 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.727 [ 00:10:23.727 { 00:10:23.727 "name": "BaseBdev4", 00:10:23.727 "aliases": [ 00:10:23.727 "87d20a33-fe25-48b1-b546-062148ac389b" 00:10:23.727 ], 00:10:23.727 "product_name": "Malloc disk", 00:10:23.727 "block_size": 512, 00:10:23.727 "num_blocks": 65536, 00:10:23.727 "uuid": "87d20a33-fe25-48b1-b546-062148ac389b", 00:10:23.727 "assigned_rate_limits": { 00:10:23.727 "rw_ios_per_sec": 0, 00:10:23.727 "rw_mbytes_per_sec": 0, 00:10:23.727 "r_mbytes_per_sec": 0, 
00:10:23.727 "w_mbytes_per_sec": 0 00:10:23.727 }, 00:10:23.727 "claimed": true, 00:10:23.727 "claim_type": "exclusive_write", 00:10:23.727 "zoned": false, 00:10:23.727 "supported_io_types": { 00:10:23.727 "read": true, 00:10:23.727 "write": true, 00:10:23.727 "unmap": true, 00:10:23.727 "flush": true, 00:10:23.727 "reset": true, 00:10:23.727 "nvme_admin": false, 00:10:23.727 "nvme_io": false, 00:10:23.727 "nvme_io_md": false, 00:10:23.727 "write_zeroes": true, 00:10:23.727 "zcopy": true, 00:10:23.727 "get_zone_info": false, 00:10:23.727 "zone_management": false, 00:10:23.727 "zone_append": false, 00:10:23.727 "compare": false, 00:10:23.727 "compare_and_write": false, 00:10:23.727 "abort": true, 00:10:23.727 "seek_hole": false, 00:10:23.727 "seek_data": false, 00:10:23.727 "copy": true, 00:10:23.727 "nvme_iov_md": false 00:10:23.727 }, 00:10:23.727 "memory_domains": [ 00:10:23.727 { 00:10:23.727 "dma_device_id": "system", 00:10:23.727 "dma_device_type": 1 00:10:23.727 }, 00:10:23.727 { 00:10:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.727 "dma_device_type": 2 00:10:23.727 } 00:10:23.727 ], 00:10:23.727 "driver_specific": {} 00:10:23.727 } 00:10:23.727 ] 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.727 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.728 "name": "Existed_Raid", 00:10:23.728 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:23.728 "strip_size_kb": 64, 00:10:23.728 "state": "online", 00:10:23.728 "raid_level": "concat", 00:10:23.728 "superblock": true, 00:10:23.728 "num_base_bdevs": 4, 00:10:23.728 "num_base_bdevs_discovered": 4, 00:10:23.728 "num_base_bdevs_operational": 4, 00:10:23.728 "base_bdevs_list": [ 00:10:23.728 { 00:10:23.728 "name": "BaseBdev1", 00:10:23.728 "uuid": 
"5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "name": "BaseBdev2", 00:10:23.728 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "name": "BaseBdev3", 00:10:23.728 "uuid": "d21164b1-0fdb-47d5-af67-b940165db4d1", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "name": "BaseBdev4", 00:10:23.728 "uuid": "87d20a33-fe25-48b1-b546-062148ac389b", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 } 00:10:23.728 ] 00:10:23.728 }' 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.728 22:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.988 [2024-11-26 22:55:03.066059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.988 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.988 "name": "Existed_Raid", 00:10:23.989 "aliases": [ 00:10:23.989 "4fe19830-52eb-44f3-9ced-75e985f43084" 00:10:23.989 ], 00:10:23.989 "product_name": "Raid Volume", 00:10:23.989 "block_size": 512, 00:10:23.989 "num_blocks": 253952, 00:10:23.989 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:23.989 "assigned_rate_limits": { 00:10:23.989 "rw_ios_per_sec": 0, 00:10:23.989 "rw_mbytes_per_sec": 0, 00:10:23.989 "r_mbytes_per_sec": 0, 00:10:23.989 "w_mbytes_per_sec": 0 00:10:23.989 }, 00:10:23.989 "claimed": false, 00:10:23.989 "zoned": false, 00:10:23.989 "supported_io_types": { 00:10:23.989 "read": true, 00:10:23.989 "write": true, 00:10:23.989 "unmap": true, 00:10:23.989 "flush": true, 00:10:23.989 "reset": true, 00:10:23.989 "nvme_admin": false, 00:10:23.989 "nvme_io": false, 00:10:23.989 "nvme_io_md": false, 00:10:23.989 "write_zeroes": true, 00:10:23.989 "zcopy": false, 00:10:23.989 "get_zone_info": false, 00:10:23.989 "zone_management": false, 00:10:23.989 "zone_append": false, 00:10:23.989 "compare": false, 00:10:23.989 "compare_and_write": false, 00:10:23.989 "abort": false, 00:10:23.989 "seek_hole": false, 00:10:23.989 "seek_data": false, 00:10:23.989 "copy": false, 00:10:23.989 "nvme_iov_md": false 00:10:23.989 }, 00:10:23.989 "memory_domains": [ 00:10:23.989 { 00:10:23.989 "dma_device_id": "system", 00:10:23.989 "dma_device_type": 1 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.989 "dma_device_type": 2 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "system", 00:10:23.989 "dma_device_type": 1 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.989 "dma_device_type": 2 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "system", 00:10:23.989 "dma_device_type": 1 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.989 "dma_device_type": 2 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "system", 00:10:23.989 "dma_device_type": 1 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.989 "dma_device_type": 2 00:10:23.989 } 00:10:23.989 ], 00:10:23.989 "driver_specific": { 00:10:23.989 "raid": { 00:10:23.989 "uuid": "4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:23.989 "strip_size_kb": 64, 00:10:23.989 "state": "online", 00:10:23.989 "raid_level": "concat", 00:10:23.989 "superblock": true, 00:10:23.989 "num_base_bdevs": 4, 00:10:23.989 "num_base_bdevs_discovered": 4, 00:10:23.989 "num_base_bdevs_operational": 4, 00:10:23.989 "base_bdevs_list": [ 00:10:23.989 { 00:10:23.989 "name": "BaseBdev1", 00:10:23.989 "uuid": "5d0bf080-1f4b-4b70-999c-3e3fc0e678b7", 00:10:23.989 "is_configured": true, 00:10:23.989 "data_offset": 2048, 00:10:23.989 "data_size": 63488 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "name": "BaseBdev2", 00:10:23.989 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:23.989 "is_configured": true, 00:10:23.989 "data_offset": 2048, 00:10:23.989 "data_size": 63488 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "name": "BaseBdev3", 00:10:23.989 "uuid": "d21164b1-0fdb-47d5-af67-b940165db4d1", 00:10:23.989 "is_configured": true, 00:10:23.989 "data_offset": 2048, 00:10:23.989 "data_size": 63488 00:10:23.989 }, 00:10:23.989 { 00:10:23.989 "name": "BaseBdev4", 00:10:23.989 "uuid": "87d20a33-fe25-48b1-b546-062148ac389b", 
00:10:23.989 "is_configured": true, 00:10:23.989 "data_offset": 2048, 00:10:23.989 "data_size": 63488 00:10:23.989 } 00:10:23.989 ] 00:10:23.989 } 00:10:23.989 } 00:10:23.989 }' 00:10:23.989 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:24.250 BaseBdev2 00:10:24.250 BaseBdev3 00:10:24.250 BaseBdev4' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.250 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 [2024-11-26 22:55:03.357852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.250 [2024-11-26 22:55:03.357929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.250 [2024-11-26 22:55:03.358062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:24.510 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.511 "name": "Existed_Raid", 00:10:24.511 "uuid": 
"4fe19830-52eb-44f3-9ced-75e985f43084", 00:10:24.511 "strip_size_kb": 64, 00:10:24.511 "state": "offline", 00:10:24.511 "raid_level": "concat", 00:10:24.511 "superblock": true, 00:10:24.511 "num_base_bdevs": 4, 00:10:24.511 "num_base_bdevs_discovered": 3, 00:10:24.511 "num_base_bdevs_operational": 3, 00:10:24.511 "base_bdevs_list": [ 00:10:24.511 { 00:10:24.511 "name": null, 00:10:24.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.511 "is_configured": false, 00:10:24.511 "data_offset": 0, 00:10:24.511 "data_size": 63488 00:10:24.511 }, 00:10:24.511 { 00:10:24.511 "name": "BaseBdev2", 00:10:24.511 "uuid": "27ce4a53-9be7-4689-9dd1-8b19addaf9c4", 00:10:24.511 "is_configured": true, 00:10:24.511 "data_offset": 2048, 00:10:24.511 "data_size": 63488 00:10:24.511 }, 00:10:24.511 { 00:10:24.511 "name": "BaseBdev3", 00:10:24.511 "uuid": "d21164b1-0fdb-47d5-af67-b940165db4d1", 00:10:24.511 "is_configured": true, 00:10:24.511 "data_offset": 2048, 00:10:24.511 "data_size": 63488 00:10:24.511 }, 00:10:24.511 { 00:10:24.511 "name": "BaseBdev4", 00:10:24.511 "uuid": "87d20a33-fe25-48b1-b546-062148ac389b", 00:10:24.511 "is_configured": true, 00:10:24.511 "data_offset": 2048, 00:10:24.511 "data_size": 63488 00:10:24.511 } 00:10:24.511 ] 00:10:24.511 }' 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.511 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.770 [2024-11-26 22:55:03.843287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.770 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.030 22:55:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.030 [2024-11-26 22:55:03.923795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 22:55:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.030 [2024-11-26 22:55:03.996626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:25.030 [2024-11-26 22:55:03.996747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 
22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.030 BaseBdev2 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.030 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.031 [ 00:10:25.031 { 00:10:25.031 "name": "BaseBdev2", 00:10:25.031 "aliases": [ 00:10:25.031 "e3417f9b-1f8b-42aa-93e3-49e08df54576" 00:10:25.031 ], 00:10:25.031 "product_name": "Malloc disk", 00:10:25.031 "block_size": 512, 00:10:25.031 "num_blocks": 65536, 
00:10:25.031 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:25.031 "assigned_rate_limits": { 00:10:25.031 "rw_ios_per_sec": 0, 00:10:25.031 "rw_mbytes_per_sec": 0, 00:10:25.031 "r_mbytes_per_sec": 0, 00:10:25.031 "w_mbytes_per_sec": 0 00:10:25.031 }, 00:10:25.031 "claimed": false, 00:10:25.031 "zoned": false, 00:10:25.031 "supported_io_types": { 00:10:25.031 "read": true, 00:10:25.031 "write": true, 00:10:25.031 "unmap": true, 00:10:25.031 "flush": true, 00:10:25.031 "reset": true, 00:10:25.031 "nvme_admin": false, 00:10:25.031 "nvme_io": false, 00:10:25.031 "nvme_io_md": false, 00:10:25.031 "write_zeroes": true, 00:10:25.031 "zcopy": true, 00:10:25.031 "get_zone_info": false, 00:10:25.031 "zone_management": false, 00:10:25.031 "zone_append": false, 00:10:25.031 "compare": false, 00:10:25.031 "compare_and_write": false, 00:10:25.031 "abort": true, 00:10:25.031 "seek_hole": false, 00:10:25.031 "seek_data": false, 00:10:25.031 "copy": true, 00:10:25.031 "nvme_iov_md": false 00:10:25.031 }, 00:10:25.031 "memory_domains": [ 00:10:25.031 { 00:10:25.031 "dma_device_id": "system", 00:10:25.031 "dma_device_type": 1 00:10:25.031 }, 00:10:25.031 { 00:10:25.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.031 "dma_device_type": 2 00:10:25.031 } 00:10:25.031 ], 00:10:25.031 "driver_specific": {} 00:10:25.031 } 00:10:25.031 ] 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.031 BaseBdev3 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.031 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.291 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.291 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:25.291 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.291 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.291 [ 00:10:25.291 { 00:10:25.291 "name": "BaseBdev3", 00:10:25.291 "aliases": [ 00:10:25.291 "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e" 00:10:25.291 ], 00:10:25.291 "product_name": "Malloc disk", 
00:10:25.291 "block_size": 512, 00:10:25.291 "num_blocks": 65536, 00:10:25.291 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:25.291 "assigned_rate_limits": { 00:10:25.291 "rw_ios_per_sec": 0, 00:10:25.291 "rw_mbytes_per_sec": 0, 00:10:25.291 "r_mbytes_per_sec": 0, 00:10:25.291 "w_mbytes_per_sec": 0 00:10:25.291 }, 00:10:25.291 "claimed": false, 00:10:25.291 "zoned": false, 00:10:25.291 "supported_io_types": { 00:10:25.291 "read": true, 00:10:25.291 "write": true, 00:10:25.291 "unmap": true, 00:10:25.291 "flush": true, 00:10:25.291 "reset": true, 00:10:25.291 "nvme_admin": false, 00:10:25.291 "nvme_io": false, 00:10:25.291 "nvme_io_md": false, 00:10:25.291 "write_zeroes": true, 00:10:25.291 "zcopy": true, 00:10:25.291 "get_zone_info": false, 00:10:25.291 "zone_management": false, 00:10:25.291 "zone_append": false, 00:10:25.291 "compare": false, 00:10:25.291 "compare_and_write": false, 00:10:25.292 "abort": true, 00:10:25.292 "seek_hole": false, 00:10:25.292 "seek_data": false, 00:10:25.292 "copy": true, 00:10:25.292 "nvme_iov_md": false 00:10:25.292 }, 00:10:25.292 "memory_domains": [ 00:10:25.292 { 00:10:25.292 "dma_device_id": "system", 00:10:25.292 "dma_device_type": 1 00:10:25.292 }, 00:10:25.292 { 00:10:25.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.292 "dma_device_type": 2 00:10:25.292 } 00:10:25.292 ], 00:10:25.292 "driver_specific": {} 00:10:25.292 } 00:10:25.292 ] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:25.292 
22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.292 BaseBdev4 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.292 [ 00:10:25.292 { 00:10:25.292 "name": "BaseBdev4", 00:10:25.292 "aliases": [ 00:10:25.292 "aa5ff3d3-8d74-414c-82ab-81d8dadeb546" 00:10:25.292 ], 
00:10:25.292 "product_name": "Malloc disk", 00:10:25.292 "block_size": 512, 00:10:25.292 "num_blocks": 65536, 00:10:25.292 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:25.292 "assigned_rate_limits": { 00:10:25.292 "rw_ios_per_sec": 0, 00:10:25.292 "rw_mbytes_per_sec": 0, 00:10:25.292 "r_mbytes_per_sec": 0, 00:10:25.292 "w_mbytes_per_sec": 0 00:10:25.292 }, 00:10:25.292 "claimed": false, 00:10:25.292 "zoned": false, 00:10:25.292 "supported_io_types": { 00:10:25.292 "read": true, 00:10:25.292 "write": true, 00:10:25.292 "unmap": true, 00:10:25.292 "flush": true, 00:10:25.292 "reset": true, 00:10:25.292 "nvme_admin": false, 00:10:25.292 "nvme_io": false, 00:10:25.292 "nvme_io_md": false, 00:10:25.292 "write_zeroes": true, 00:10:25.292 "zcopy": true, 00:10:25.292 "get_zone_info": false, 00:10:25.292 "zone_management": false, 00:10:25.292 "zone_append": false, 00:10:25.292 "compare": false, 00:10:25.292 "compare_and_write": false, 00:10:25.292 "abort": true, 00:10:25.292 "seek_hole": false, 00:10:25.292 "seek_data": false, 00:10:25.292 "copy": true, 00:10:25.292 "nvme_iov_md": false 00:10:25.292 }, 00:10:25.292 "memory_domains": [ 00:10:25.292 { 00:10:25.292 "dma_device_id": "system", 00:10:25.292 "dma_device_type": 1 00:10:25.292 }, 00:10:25.292 { 00:10:25.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.292 "dma_device_type": 2 00:10:25.292 } 00:10:25.292 ], 00:10:25.292 "driver_specific": {} 00:10:25.292 } 00:10:25.292 ] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.292 [2024-11-26 22:55:04.246946] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.292 [2024-11-26 22:55:04.247056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.292 [2024-11-26 22:55:04.247125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.292 [2024-11-26 22:55:04.249275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.292 [2024-11-26 22:55:04.249383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.292 "name": "Existed_Raid", 00:10:25.292 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:25.292 "strip_size_kb": 64, 00:10:25.292 "state": "configuring", 00:10:25.292 "raid_level": "concat", 00:10:25.292 "superblock": true, 00:10:25.292 "num_base_bdevs": 4, 00:10:25.292 "num_base_bdevs_discovered": 3, 00:10:25.292 "num_base_bdevs_operational": 4, 00:10:25.292 "base_bdevs_list": [ 00:10:25.292 { 00:10:25.292 "name": "BaseBdev1", 00:10:25.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.292 "is_configured": false, 00:10:25.292 "data_offset": 0, 00:10:25.292 "data_size": 0 00:10:25.292 }, 00:10:25.292 { 00:10:25.292 "name": "BaseBdev2", 00:10:25.292 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:25.292 "is_configured": true, 00:10:25.292 "data_offset": 2048, 00:10:25.292 "data_size": 63488 00:10:25.292 }, 00:10:25.292 { 00:10:25.292 "name": "BaseBdev3", 00:10:25.292 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:25.292 "is_configured": true, 00:10:25.292 "data_offset": 2048, 
00:10:25.292 "data_size": 63488 00:10:25.292 }, 00:10:25.292 { 00:10:25.292 "name": "BaseBdev4", 00:10:25.292 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:25.292 "is_configured": true, 00:10:25.292 "data_offset": 2048, 00:10:25.292 "data_size": 63488 00:10:25.292 } 00:10:25.292 ] 00:10:25.292 }' 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.292 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 [2024-11-26 22:55:04.727014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.862 22:55:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.862 "name": "Existed_Raid", 00:10:25.862 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:25.862 "strip_size_kb": 64, 00:10:25.862 "state": "configuring", 00:10:25.862 "raid_level": "concat", 00:10:25.862 "superblock": true, 00:10:25.862 "num_base_bdevs": 4, 00:10:25.862 "num_base_bdevs_discovered": 2, 00:10:25.862 "num_base_bdevs_operational": 4, 00:10:25.862 "base_bdevs_list": [ 00:10:25.862 { 00:10:25.862 "name": "BaseBdev1", 00:10:25.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.862 "is_configured": false, 00:10:25.862 "data_offset": 0, 00:10:25.862 "data_size": 0 00:10:25.862 }, 00:10:25.862 { 00:10:25.862 "name": null, 00:10:25.862 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:25.862 "is_configured": false, 00:10:25.862 "data_offset": 0, 00:10:25.862 "data_size": 63488 00:10:25.862 }, 00:10:25.862 { 00:10:25.862 "name": "BaseBdev3", 00:10:25.862 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:25.862 "is_configured": true, 
00:10:25.862 "data_offset": 2048, 00:10:25.862 "data_size": 63488 00:10:25.862 }, 00:10:25.862 { 00:10:25.862 "name": "BaseBdev4", 00:10:25.862 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:25.862 "is_configured": true, 00:10:25.862 "data_offset": 2048, 00:10:25.862 "data_size": 63488 00:10:25.862 } 00:10:25.862 ] 00:10:25.862 }' 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.862 22:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.122 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.122 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.122 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.123 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.123 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.382 [2024-11-26 22:55:05.272086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.382 BaseBdev1 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:26.382 22:55:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.382 [ 00:10:26.382 { 00:10:26.382 "name": "BaseBdev1", 00:10:26.382 "aliases": [ 00:10:26.382 "e19cb531-616d-4ae4-b95a-e8b0553b24fb" 00:10:26.382 ], 00:10:26.382 "product_name": "Malloc disk", 00:10:26.382 "block_size": 512, 00:10:26.382 "num_blocks": 65536, 00:10:26.382 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:26.382 "assigned_rate_limits": { 00:10:26.382 "rw_ios_per_sec": 0, 00:10:26.382 "rw_mbytes_per_sec": 0, 00:10:26.382 "r_mbytes_per_sec": 0, 00:10:26.382 "w_mbytes_per_sec": 0 00:10:26.382 }, 00:10:26.382 "claimed": true, 00:10:26.382 "claim_type": "exclusive_write", 00:10:26.382 "zoned": false, 
00:10:26.382 "supported_io_types": { 00:10:26.382 "read": true, 00:10:26.382 "write": true, 00:10:26.382 "unmap": true, 00:10:26.382 "flush": true, 00:10:26.382 "reset": true, 00:10:26.382 "nvme_admin": false, 00:10:26.382 "nvme_io": false, 00:10:26.382 "nvme_io_md": false, 00:10:26.382 "write_zeroes": true, 00:10:26.382 "zcopy": true, 00:10:26.382 "get_zone_info": false, 00:10:26.382 "zone_management": false, 00:10:26.382 "zone_append": false, 00:10:26.382 "compare": false, 00:10:26.382 "compare_and_write": false, 00:10:26.382 "abort": true, 00:10:26.382 "seek_hole": false, 00:10:26.382 "seek_data": false, 00:10:26.382 "copy": true, 00:10:26.382 "nvme_iov_md": false 00:10:26.382 }, 00:10:26.382 "memory_domains": [ 00:10:26.382 { 00:10:26.382 "dma_device_id": "system", 00:10:26.382 "dma_device_type": 1 00:10:26.382 }, 00:10:26.382 { 00:10:26.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.382 "dma_device_type": 2 00:10:26.382 } 00:10:26.382 ], 00:10:26.382 "driver_specific": {} 00:10:26.382 } 00:10:26.382 ] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.382 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.383 "name": "Existed_Raid", 00:10:26.383 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:26.383 "strip_size_kb": 64, 00:10:26.383 "state": "configuring", 00:10:26.383 "raid_level": "concat", 00:10:26.383 "superblock": true, 00:10:26.383 "num_base_bdevs": 4, 00:10:26.383 "num_base_bdevs_discovered": 3, 00:10:26.383 "num_base_bdevs_operational": 4, 00:10:26.383 "base_bdevs_list": [ 00:10:26.383 { 00:10:26.383 "name": "BaseBdev1", 00:10:26.383 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:26.383 "is_configured": true, 00:10:26.383 "data_offset": 2048, 00:10:26.383 "data_size": 63488 00:10:26.383 }, 00:10:26.383 { 00:10:26.383 "name": null, 00:10:26.383 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:26.383 "is_configured": false, 00:10:26.383 "data_offset": 0, 00:10:26.383 "data_size": 63488 00:10:26.383 }, 00:10:26.383 { 
00:10:26.383 "name": "BaseBdev3", 00:10:26.383 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:26.383 "is_configured": true, 00:10:26.383 "data_offset": 2048, 00:10:26.383 "data_size": 63488 00:10:26.383 }, 00:10:26.383 { 00:10:26.383 "name": "BaseBdev4", 00:10:26.383 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:26.383 "is_configured": true, 00:10:26.383 "data_offset": 2048, 00:10:26.383 "data_size": 63488 00:10:26.383 } 00:10:26.383 ] 00:10:26.383 }' 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.383 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.642 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.642 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:26.642 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.642 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.901 [2024-11-26 22:55:05.808282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.901 22:55:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.901 "name": "Existed_Raid", 00:10:26.901 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:26.901 "strip_size_kb": 64, 
00:10:26.901 "state": "configuring", 00:10:26.901 "raid_level": "concat", 00:10:26.901 "superblock": true, 00:10:26.901 "num_base_bdevs": 4, 00:10:26.901 "num_base_bdevs_discovered": 2, 00:10:26.901 "num_base_bdevs_operational": 4, 00:10:26.901 "base_bdevs_list": [ 00:10:26.901 { 00:10:26.901 "name": "BaseBdev1", 00:10:26.901 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:26.901 "is_configured": true, 00:10:26.901 "data_offset": 2048, 00:10:26.901 "data_size": 63488 00:10:26.901 }, 00:10:26.901 { 00:10:26.901 "name": null, 00:10:26.901 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:26.901 "is_configured": false, 00:10:26.901 "data_offset": 0, 00:10:26.901 "data_size": 63488 00:10:26.901 }, 00:10:26.901 { 00:10:26.901 "name": null, 00:10:26.901 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:26.901 "is_configured": false, 00:10:26.901 "data_offset": 0, 00:10:26.901 "data_size": 63488 00:10:26.901 }, 00:10:26.901 { 00:10:26.901 "name": "BaseBdev4", 00:10:26.901 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:26.901 "is_configured": true, 00:10:26.901 "data_offset": 2048, 00:10:26.901 "data_size": 63488 00:10:26.901 } 00:10:26.901 ] 00:10:26.901 }' 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.901 22:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.161 
22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.161 [2024-11-26 22:55:06.240473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.161 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.424 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.424 "name": "Existed_Raid", 00:10:27.424 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:27.424 "strip_size_kb": 64, 00:10:27.424 "state": "configuring", 00:10:27.424 "raid_level": "concat", 00:10:27.424 "superblock": true, 00:10:27.424 "num_base_bdevs": 4, 00:10:27.424 "num_base_bdevs_discovered": 3, 00:10:27.424 "num_base_bdevs_operational": 4, 00:10:27.424 "base_bdevs_list": [ 00:10:27.424 { 00:10:27.424 "name": "BaseBdev1", 00:10:27.424 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:27.424 "is_configured": true, 00:10:27.424 "data_offset": 2048, 00:10:27.424 "data_size": 63488 00:10:27.424 }, 00:10:27.424 { 00:10:27.424 "name": null, 00:10:27.424 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:27.424 "is_configured": false, 00:10:27.424 "data_offset": 0, 00:10:27.424 "data_size": 63488 00:10:27.424 }, 00:10:27.424 { 00:10:27.424 "name": "BaseBdev3", 00:10:27.424 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:27.424 "is_configured": true, 00:10:27.424 "data_offset": 2048, 00:10:27.424 "data_size": 63488 00:10:27.424 }, 00:10:27.424 { 00:10:27.424 "name": "BaseBdev4", 00:10:27.424 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:27.424 "is_configured": true, 00:10:27.424 "data_offset": 2048, 00:10:27.424 "data_size": 63488 00:10:27.424 } 00:10:27.424 ] 00:10:27.424 }' 00:10:27.424 22:55:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.424 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.694 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 [2024-11-26 22:55:06.704615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.695 "name": "Existed_Raid", 00:10:27.695 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:27.695 "strip_size_kb": 64, 00:10:27.695 "state": "configuring", 00:10:27.695 "raid_level": "concat", 00:10:27.695 "superblock": true, 00:10:27.695 "num_base_bdevs": 4, 00:10:27.695 "num_base_bdevs_discovered": 2, 00:10:27.695 "num_base_bdevs_operational": 4, 00:10:27.695 "base_bdevs_list": [ 00:10:27.695 { 00:10:27.695 "name": null, 00:10:27.695 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:27.695 "is_configured": false, 00:10:27.695 "data_offset": 0, 00:10:27.695 "data_size": 63488 00:10:27.695 }, 00:10:27.695 { 00:10:27.695 "name": null, 00:10:27.695 "uuid": 
"e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:27.695 "is_configured": false, 00:10:27.695 "data_offset": 0, 00:10:27.695 "data_size": 63488 00:10:27.695 }, 00:10:27.695 { 00:10:27.695 "name": "BaseBdev3", 00:10:27.695 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:27.695 "is_configured": true, 00:10:27.695 "data_offset": 2048, 00:10:27.695 "data_size": 63488 00:10:27.695 }, 00:10:27.695 { 00:10:27.695 "name": "BaseBdev4", 00:10:27.695 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:27.695 "is_configured": true, 00:10:27.695 "data_offset": 2048, 00:10:27.695 "data_size": 63488 00:10:27.695 } 00:10:27.695 ] 00:10:27.695 }' 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.695 22:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.280 [2024-11-26 22:55:07.216858] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.280 22:55:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.280 "name": "Existed_Raid", 00:10:28.280 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:28.280 "strip_size_kb": 64, 00:10:28.280 "state": "configuring", 00:10:28.280 "raid_level": "concat", 00:10:28.280 "superblock": true, 00:10:28.280 "num_base_bdevs": 4, 00:10:28.280 "num_base_bdevs_discovered": 3, 00:10:28.280 "num_base_bdevs_operational": 4, 00:10:28.280 "base_bdevs_list": [ 00:10:28.280 { 00:10:28.280 "name": null, 00:10:28.280 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:28.280 "is_configured": false, 00:10:28.280 "data_offset": 0, 00:10:28.280 "data_size": 63488 00:10:28.280 }, 00:10:28.280 { 00:10:28.280 "name": "BaseBdev2", 00:10:28.280 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:28.280 "is_configured": true, 00:10:28.280 "data_offset": 2048, 00:10:28.280 "data_size": 63488 00:10:28.280 }, 00:10:28.280 { 00:10:28.280 "name": "BaseBdev3", 00:10:28.280 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:28.280 "is_configured": true, 00:10:28.280 "data_offset": 2048, 00:10:28.280 "data_size": 63488 00:10:28.280 }, 00:10:28.280 { 00:10:28.280 "name": "BaseBdev4", 00:10:28.280 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:28.280 "is_configured": true, 00:10:28.280 "data_offset": 2048, 00:10:28.280 "data_size": 63488 00:10:28.280 } 00:10:28.280 ] 00:10:28.280 }' 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.280 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.540 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.540 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.540 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:28.540 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.540 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e19cb531-616d-4ae4-b95a-e8b0553b24fb 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.800 [2024-11-26 22:55:07.761857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:28.800 [2024-11-26 22:55:07.762176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.800 [2024-11-26 22:55:07.762240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.800 NewBaseBdev 00:10:28.800 [2024-11-26 22:55:07.762597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:28.800 [2024-11-26 22:55:07.762738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.800 [2024-11-26 22:55:07.762755] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:28.800 [2024-11-26 22:55:07.762877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.800 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.801 [ 00:10:28.801 { 00:10:28.801 "name": "NewBaseBdev", 00:10:28.801 "aliases": [ 00:10:28.801 "e19cb531-616d-4ae4-b95a-e8b0553b24fb" 
00:10:28.801 ], 00:10:28.801 "product_name": "Malloc disk", 00:10:28.801 "block_size": 512, 00:10:28.801 "num_blocks": 65536, 00:10:28.801 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:28.801 "assigned_rate_limits": { 00:10:28.801 "rw_ios_per_sec": 0, 00:10:28.801 "rw_mbytes_per_sec": 0, 00:10:28.801 "r_mbytes_per_sec": 0, 00:10:28.801 "w_mbytes_per_sec": 0 00:10:28.801 }, 00:10:28.801 "claimed": true, 00:10:28.801 "claim_type": "exclusive_write", 00:10:28.801 "zoned": false, 00:10:28.801 "supported_io_types": { 00:10:28.801 "read": true, 00:10:28.801 "write": true, 00:10:28.801 "unmap": true, 00:10:28.801 "flush": true, 00:10:28.801 "reset": true, 00:10:28.801 "nvme_admin": false, 00:10:28.801 "nvme_io": false, 00:10:28.801 "nvme_io_md": false, 00:10:28.801 "write_zeroes": true, 00:10:28.801 "zcopy": true, 00:10:28.801 "get_zone_info": false, 00:10:28.801 "zone_management": false, 00:10:28.801 "zone_append": false, 00:10:28.801 "compare": false, 00:10:28.801 "compare_and_write": false, 00:10:28.801 "abort": true, 00:10:28.801 "seek_hole": false, 00:10:28.801 "seek_data": false, 00:10:28.801 "copy": true, 00:10:28.801 "nvme_iov_md": false 00:10:28.801 }, 00:10:28.801 "memory_domains": [ 00:10:28.801 { 00:10:28.801 "dma_device_id": "system", 00:10:28.801 "dma_device_type": 1 00:10:28.801 }, 00:10:28.801 { 00:10:28.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.801 "dma_device_type": 2 00:10:28.801 } 00:10:28.801 ], 00:10:28.801 "driver_specific": {} 00:10:28.801 } 00:10:28.801 ] 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.801 "name": "Existed_Raid", 00:10:28.801 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:28.801 "strip_size_kb": 64, 00:10:28.801 "state": "online", 00:10:28.801 "raid_level": "concat", 00:10:28.801 "superblock": true, 00:10:28.801 "num_base_bdevs": 4, 00:10:28.801 "num_base_bdevs_discovered": 4, 00:10:28.801 "num_base_bdevs_operational": 4, 
00:10:28.801 "base_bdevs_list": [ 00:10:28.801 { 00:10:28.801 "name": "NewBaseBdev", 00:10:28.801 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:28.801 "is_configured": true, 00:10:28.801 "data_offset": 2048, 00:10:28.801 "data_size": 63488 00:10:28.801 }, 00:10:28.801 { 00:10:28.801 "name": "BaseBdev2", 00:10:28.801 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:28.801 "is_configured": true, 00:10:28.801 "data_offset": 2048, 00:10:28.801 "data_size": 63488 00:10:28.801 }, 00:10:28.801 { 00:10:28.801 "name": "BaseBdev3", 00:10:28.801 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:28.801 "is_configured": true, 00:10:28.801 "data_offset": 2048, 00:10:28.801 "data_size": 63488 00:10:28.801 }, 00:10:28.801 { 00:10:28.801 "name": "BaseBdev4", 00:10:28.801 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:28.801 "is_configured": true, 00:10:28.801 "data_offset": 2048, 00:10:28.801 "data_size": 63488 00:10:28.801 } 00:10:28.801 ] 00:10:28.801 }' 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.801 22:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.372 [2024-11-26 22:55:08.254326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.372 "name": "Existed_Raid", 00:10:29.372 "aliases": [ 00:10:29.372 "618f4ee0-2308-483a-b669-34f62f28fb53" 00:10:29.372 ], 00:10:29.372 "product_name": "Raid Volume", 00:10:29.372 "block_size": 512, 00:10:29.372 "num_blocks": 253952, 00:10:29.372 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:29.372 "assigned_rate_limits": { 00:10:29.372 "rw_ios_per_sec": 0, 00:10:29.372 "rw_mbytes_per_sec": 0, 00:10:29.372 "r_mbytes_per_sec": 0, 00:10:29.372 "w_mbytes_per_sec": 0 00:10:29.372 }, 00:10:29.372 "claimed": false, 00:10:29.372 "zoned": false, 00:10:29.372 "supported_io_types": { 00:10:29.372 "read": true, 00:10:29.372 "write": true, 00:10:29.372 "unmap": true, 00:10:29.372 "flush": true, 00:10:29.372 "reset": true, 00:10:29.372 "nvme_admin": false, 00:10:29.372 "nvme_io": false, 00:10:29.372 "nvme_io_md": false, 00:10:29.372 "write_zeroes": true, 00:10:29.372 "zcopy": false, 00:10:29.372 "get_zone_info": false, 00:10:29.372 "zone_management": false, 00:10:29.372 "zone_append": false, 00:10:29.372 "compare": false, 00:10:29.372 "compare_and_write": false, 00:10:29.372 "abort": false, 00:10:29.372 "seek_hole": false, 00:10:29.372 "seek_data": false, 00:10:29.372 "copy": false, 00:10:29.372 "nvme_iov_md": false 00:10:29.372 }, 00:10:29.372 "memory_domains": [ 00:10:29.372 { 
00:10:29.372 "dma_device_id": "system", 00:10:29.372 "dma_device_type": 1 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.372 "dma_device_type": 2 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "system", 00:10:29.372 "dma_device_type": 1 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.372 "dma_device_type": 2 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "system", 00:10:29.372 "dma_device_type": 1 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.372 "dma_device_type": 2 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "system", 00:10:29.372 "dma_device_type": 1 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.372 "dma_device_type": 2 00:10:29.372 } 00:10:29.372 ], 00:10:29.372 "driver_specific": { 00:10:29.372 "raid": { 00:10:29.372 "uuid": "618f4ee0-2308-483a-b669-34f62f28fb53", 00:10:29.372 "strip_size_kb": 64, 00:10:29.372 "state": "online", 00:10:29.372 "raid_level": "concat", 00:10:29.372 "superblock": true, 00:10:29.372 "num_base_bdevs": 4, 00:10:29.372 "num_base_bdevs_discovered": 4, 00:10:29.372 "num_base_bdevs_operational": 4, 00:10:29.372 "base_bdevs_list": [ 00:10:29.372 { 00:10:29.372 "name": "NewBaseBdev", 00:10:29.372 "uuid": "e19cb531-616d-4ae4-b95a-e8b0553b24fb", 00:10:29.372 "is_configured": true, 00:10:29.372 "data_offset": 2048, 00:10:29.372 "data_size": 63488 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "name": "BaseBdev2", 00:10:29.372 "uuid": "e3417f9b-1f8b-42aa-93e3-49e08df54576", 00:10:29.372 "is_configured": true, 00:10:29.372 "data_offset": 2048, 00:10:29.372 "data_size": 63488 00:10:29.372 }, 00:10:29.372 { 00:10:29.372 "name": "BaseBdev3", 00:10:29.372 "uuid": "febcbe88-b34a-4eb0-b73b-e8ec6f9ce13e", 00:10:29.372 "is_configured": true, 00:10:29.372 "data_offset": 2048, 00:10:29.372 "data_size": 63488 00:10:29.372 }, 
00:10:29.372 { 00:10:29.372 "name": "BaseBdev4", 00:10:29.372 "uuid": "aa5ff3d3-8d74-414c-82ab-81d8dadeb546", 00:10:29.372 "is_configured": true, 00:10:29.372 "data_offset": 2048, 00:10:29.372 "data_size": 63488 00:10:29.372 } 00:10:29.372 ] 00:10:29.372 } 00:10:29.372 } 00:10:29.372 }' 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:29.372 BaseBdev2 00:10:29.372 BaseBdev3 00:10:29.372 BaseBdev4' 00:10:29.372 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.373 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.633 22:55:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.633 [2024-11-26 22:55:08.594092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.633 [2024-11-26 22:55:08.594165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.633 [2024-11-26 22:55:08.594301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.633 [2024-11-26 22:55:08.594421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.633 [2024-11-26 22:55:08.594504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.633 22:55:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84426 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84426 ']' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84426 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84426 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84426' 00:10:29.633 killing process with pid 84426 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84426 00:10:29.633 [2024-11-26 22:55:08.635310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.633 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84426 00:10:29.633 [2024-11-26 22:55:08.711758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.212 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.212 ************************************ 00:10:30.212 END TEST raid_state_function_test_sb 00:10:30.212 ************************************ 00:10:30.212 00:10:30.212 real 0m9.906s 00:10:30.212 user 0m16.579s 00:10:30.212 sys 0m2.207s 00:10:30.212 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.212 22:55:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.212 22:55:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:30.212 22:55:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.212 22:55:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.212 22:55:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.212 ************************************ 00:10:30.212 START TEST raid_superblock_test 00:10:30.212 ************************************ 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 
00:10:30.212 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85080 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85080 00:10:30.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85080 ']' 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.213 22:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.213 [2024-11-26 22:55:09.213234] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:10:30.213 [2024-11-26 22:55:09.213369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85080 ] 00:10:30.479 [2024-11-26 22:55:09.348734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:30.479 [2024-11-26 22:55:09.389221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.479 [2024-11-26 22:55:09.428073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.479 [2024-11-26 22:55:09.506182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.479 [2024-11-26 22:55:09.506336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.049 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 malloc1 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-11-26 22:55:10.069080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:31.050 [2024-11-26 22:55:10.069233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.050 [2024-11-26 22:55:10.069298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:31.050 [2024-11-26 22:55:10.069354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.050 [2024-11-26 22:55:10.071891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.050 [2024-11-26 22:55:10.071988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:31.050 pt1 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 malloc2 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-11-26 22:55:10.103832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.050 [2024-11-26 22:55:10.103948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.050 [2024-11-26 22:55:10.103992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:31.050 [2024-11-26 22:55:10.104030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.050 [2024-11-26 22:55:10.106467] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.050 [2024-11-26 22:55:10.106509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.050 pt2 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 malloc3 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-11-26 22:55:10.138606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.050 [2024-11-26 22:55:10.138717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.050 [2024-11-26 22:55:10.138764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:31.050 [2024-11-26 22:55:10.138803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.050 [2024-11-26 22:55:10.141277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.050 [2024-11-26 22:55:10.141369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.050 pt3 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.050 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 malloc4 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 [2024-11-26 22:55:10.184213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:31.310 [2024-11-26 22:55:10.184356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.310 [2024-11-26 22:55:10.184407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:31.310 [2024-11-26 22:55:10.184445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.310 [2024-11-26 22:55:10.186952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.310 [2024-11-26 22:55:10.187038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:31.310 pt4 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.310 22:55:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 [2024-11-26 22:55:10.196290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:31.310 [2024-11-26 22:55:10.198551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.310 [2024-11-26 22:55:10.198662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.310 [2024-11-26 22:55:10.198711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:31.310 [2024-11-26 22:55:10.198879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:31.310 [2024-11-26 22:55:10.198892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:31.310 [2024-11-26 22:55:10.199185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:31.310 [2024-11-26 22:55:10.199367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:31.310 [2024-11-26 22:55:10.199384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:31.310 [2024-11-26 22:55:10.199538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.310 "name": "raid_bdev1", 00:10:31.310 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:31.310 "strip_size_kb": 64, 00:10:31.310 "state": "online", 00:10:31.310 "raid_level": "concat", 00:10:31.310 "superblock": true, 00:10:31.310 "num_base_bdevs": 4, 00:10:31.310 "num_base_bdevs_discovered": 4, 00:10:31.310 "num_base_bdevs_operational": 4, 00:10:31.310 "base_bdevs_list": [ 00:10:31.310 { 00:10:31.310 "name": "pt1", 00:10:31.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "name": "pt2", 00:10:31.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 
"data_size": 63488 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "name": "pt3", 00:10:31.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "name": "pt4", 00:10:31.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 } 00:10:31.310 ] 00:10:31.310 }' 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.310 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.569 [2024-11-26 22:55:10.640741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:31.569 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.569 "name": "raid_bdev1", 00:10:31.569 "aliases": [ 00:10:31.569 "0b5a43f4-db9b-44de-9893-003d82d68bbe" 00:10:31.569 ], 00:10:31.569 "product_name": "Raid Volume", 00:10:31.569 "block_size": 512, 00:10:31.569 "num_blocks": 253952, 00:10:31.569 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:31.569 "assigned_rate_limits": { 00:10:31.569 "rw_ios_per_sec": 0, 00:10:31.569 "rw_mbytes_per_sec": 0, 00:10:31.569 "r_mbytes_per_sec": 0, 00:10:31.569 "w_mbytes_per_sec": 0 00:10:31.569 }, 00:10:31.569 "claimed": false, 00:10:31.569 "zoned": false, 00:10:31.569 "supported_io_types": { 00:10:31.569 "read": true, 00:10:31.569 "write": true, 00:10:31.569 "unmap": true, 00:10:31.569 "flush": true, 00:10:31.569 "reset": true, 00:10:31.569 "nvme_admin": false, 00:10:31.569 "nvme_io": false, 00:10:31.569 "nvme_io_md": false, 00:10:31.569 "write_zeroes": true, 00:10:31.569 "zcopy": false, 00:10:31.569 "get_zone_info": false, 00:10:31.569 "zone_management": false, 00:10:31.569 "zone_append": false, 00:10:31.569 "compare": false, 00:10:31.569 "compare_and_write": false, 00:10:31.569 "abort": false, 00:10:31.569 "seek_hole": false, 00:10:31.569 "seek_data": false, 00:10:31.569 "copy": false, 00:10:31.569 "nvme_iov_md": false 00:10:31.569 }, 00:10:31.569 "memory_domains": [ 00:10:31.569 { 00:10:31.569 "dma_device_id": "system", 00:10:31.569 "dma_device_type": 1 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.569 "dma_device_type": 2 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "system", 00:10:31.569 "dma_device_type": 1 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.569 "dma_device_type": 2 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "system", 00:10:31.569 "dma_device_type": 1 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:31.569 "dma_device_type": 2 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "system", 00:10:31.569 "dma_device_type": 1 00:10:31.569 }, 00:10:31.569 { 00:10:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.569 "dma_device_type": 2 00:10:31.569 } 00:10:31.569 ], 00:10:31.569 "driver_specific": { 00:10:31.569 "raid": { 00:10:31.569 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:31.569 "strip_size_kb": 64, 00:10:31.569 "state": "online", 00:10:31.570 "raid_level": "concat", 00:10:31.570 "superblock": true, 00:10:31.570 "num_base_bdevs": 4, 00:10:31.570 "num_base_bdevs_discovered": 4, 00:10:31.570 "num_base_bdevs_operational": 4, 00:10:31.570 "base_bdevs_list": [ 00:10:31.570 { 00:10:31.570 "name": "pt1", 00:10:31.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.570 "is_configured": true, 00:10:31.570 "data_offset": 2048, 00:10:31.570 "data_size": 63488 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "name": "pt2", 00:10:31.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.570 "is_configured": true, 00:10:31.570 "data_offset": 2048, 00:10:31.570 "data_size": 63488 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "name": "pt3", 00:10:31.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.570 "is_configured": true, 00:10:31.570 "data_offset": 2048, 00:10:31.570 "data_size": 63488 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "name": "pt4", 00:10:31.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:31.570 "is_configured": true, 00:10:31.570 "data_offset": 2048, 00:10:31.570 "data_size": 63488 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 } 00:10:31.570 } 00:10:31.570 }' 00:10:31.570 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:31.830 pt2 00:10:31.830 pt3 00:10:31.830 
pt4' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.830 22:55:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.830 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 [2024-11-26 22:55:10.956686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0b5a43f4-db9b-44de-9893-003d82d68bbe 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0b5a43f4-db9b-44de-9893-003d82d68bbe ']' 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 [2024-11-26 22:55:11.000403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.090 [2024-11-26 22:55:11.000432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.090 [2024-11-26 22:55:11.000526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.090 [2024-11-26 22:55:11.000622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.090 [2024-11-26 22:55:11.000645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.090 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.091 [2024-11-26 22:55:11.160523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:32.091 [2024-11-26 22:55:11.162770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:32.091 [2024-11-26 22:55:11.162829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:32.091 [2024-11-26 22:55:11.162866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:32.091 [2024-11-26 22:55:11.162921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:32.091 [2024-11-26 22:55:11.162975] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:32.091 [2024-11-26 22:55:11.163003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:32.091 [2024-11-26 22:55:11.163026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:32.091 [2024-11-26 
22:55:11.163041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.091 [2024-11-26 22:55:11.163055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:32.091 request: 00:10:32.091 { 00:10:32.091 "name": "raid_bdev1", 00:10:32.091 "raid_level": "concat", 00:10:32.091 "base_bdevs": [ 00:10:32.091 "malloc1", 00:10:32.091 "malloc2", 00:10:32.091 "malloc3", 00:10:32.091 "malloc4" 00:10:32.091 ], 00:10:32.091 "strip_size_kb": 64, 00:10:32.091 "superblock": false, 00:10:32.091 "method": "bdev_raid_create", 00:10:32.091 "req_id": 1 00:10:32.091 } 00:10:32.091 Got JSON-RPC error response 00:10:32.091 response: 00:10:32.091 { 00:10:32.091 "code": -17, 00:10:32.091 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:32.091 } 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:32.091 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:32.350 22:55:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 [2024-11-26 22:55:11.224490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.350 [2024-11-26 22:55:11.224554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.350 [2024-11-26 22:55:11.224577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:32.350 [2024-11-26 22:55:11.224590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.350 [2024-11-26 22:55:11.227123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.350 [2024-11-26 22:55:11.227171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.350 [2024-11-26 22:55:11.227273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:32.350 [2024-11-26 22:55:11.227325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:32.350 pt1 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.351 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.351 "name": "raid_bdev1", 00:10:32.351 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:32.351 "strip_size_kb": 64, 00:10:32.351 "state": "configuring", 00:10:32.351 "raid_level": "concat", 00:10:32.351 "superblock": true, 00:10:32.351 "num_base_bdevs": 4, 00:10:32.351 "num_base_bdevs_discovered": 1, 00:10:32.351 "num_base_bdevs_operational": 4, 00:10:32.351 "base_bdevs_list": [ 00:10:32.351 { 00:10:32.351 "name": "pt1", 00:10:32.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.351 "is_configured": true, 00:10:32.351 "data_offset": 2048, 00:10:32.351 "data_size": 63488 00:10:32.351 }, 00:10:32.351 { 00:10:32.351 "name": null, 00:10:32.351 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:32.351 "is_configured": false, 00:10:32.351 "data_offset": 2048, 00:10:32.351 "data_size": 63488 00:10:32.351 }, 00:10:32.351 { 00:10:32.351 "name": null, 00:10:32.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.351 "is_configured": false, 00:10:32.351 "data_offset": 2048, 00:10:32.351 "data_size": 63488 00:10:32.351 }, 00:10:32.351 { 00:10:32.351 "name": null, 00:10:32.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:32.351 "is_configured": false, 00:10:32.351 "data_offset": 2048, 00:10:32.351 "data_size": 63488 00:10:32.351 } 00:10:32.351 ] 00:10:32.351 }' 00:10:32.351 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.351 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 [2024-11-26 22:55:11.632627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:32.610 [2024-11-26 22:55:11.632705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.610 [2024-11-26 22:55:11.632733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:32.610 [2024-11-26 22:55:11.632746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.610 [2024-11-26 22:55:11.633269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.610 [2024-11-26 22:55:11.633308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:10:32.610 [2024-11-26 22:55:11.633404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:32.610 [2024-11-26 22:55:11.633443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.610 pt2 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 [2024-11-26 22:55:11.644617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.610 "name": "raid_bdev1", 00:10:32.610 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:32.610 "strip_size_kb": 64, 00:10:32.610 "state": "configuring", 00:10:32.610 "raid_level": "concat", 00:10:32.610 "superblock": true, 00:10:32.610 "num_base_bdevs": 4, 00:10:32.610 "num_base_bdevs_discovered": 1, 00:10:32.610 "num_base_bdevs_operational": 4, 00:10:32.610 "base_bdevs_list": [ 00:10:32.610 { 00:10:32.610 "name": "pt1", 00:10:32.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.610 "is_configured": true, 00:10:32.610 "data_offset": 2048, 00:10:32.610 "data_size": 63488 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": null, 00:10:32.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.610 "is_configured": false, 00:10:32.610 "data_offset": 0, 00:10:32.610 "data_size": 63488 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": null, 00:10:32.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.610 "is_configured": false, 00:10:32.610 "data_offset": 2048, 00:10:32.610 "data_size": 63488 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": null, 00:10:32.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:32.610 "is_configured": false, 00:10:32.610 "data_offset": 2048, 00:10:32.610 "data_size": 63488 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }' 
00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.610 22:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.178 [2024-11-26 22:55:12.060732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.178 [2024-11-26 22:55:12.060812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.178 [2024-11-26 22:55:12.060839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:33.178 [2024-11-26 22:55:12.060850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.178 [2024-11-26 22:55:12.061383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.178 [2024-11-26 22:55:12.061413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.178 [2024-11-26 22:55:12.061516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.178 [2024-11-26 22:55:12.061550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.178 pt2 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.178 22:55:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.178 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.178 [2024-11-26 22:55:12.072711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.178 [2024-11-26 22:55:12.072773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.178 [2024-11-26 22:55:12.072797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:33.178 [2024-11-26 22:55:12.072806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.179 [2024-11-26 22:55:12.073233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.179 [2024-11-26 22:55:12.073274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.179 [2024-11-26 22:55:12.073348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:33.179 [2024-11-26 22:55:12.073384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.179 pt3 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.179 [2024-11-26 22:55:12.084697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:33.179 [2024-11-26 22:55:12.084748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.179 [2024-11-26 22:55:12.084767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:33.179 [2024-11-26 22:55:12.084777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.179 [2024-11-26 22:55:12.085147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.179 [2024-11-26 22:55:12.085174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:33.179 [2024-11-26 22:55:12.085245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:33.179 [2024-11-26 22:55:12.085284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:33.179 [2024-11-26 22:55:12.085402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:33.179 [2024-11-26 22:55:12.085420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.179 [2024-11-26 22:55:12.085697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:33.179 [2024-11-26 22:55:12.085850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:33.179 [2024-11-26 22:55:12.085872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:33.179 [2024-11-26 22:55:12.085988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.179 pt4 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.179 "name": 
"raid_bdev1", 00:10:33.179 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:33.179 "strip_size_kb": 64, 00:10:33.179 "state": "online", 00:10:33.179 "raid_level": "concat", 00:10:33.179 "superblock": true, 00:10:33.179 "num_base_bdevs": 4, 00:10:33.179 "num_base_bdevs_discovered": 4, 00:10:33.179 "num_base_bdevs_operational": 4, 00:10:33.179 "base_bdevs_list": [ 00:10:33.179 { 00:10:33.179 "name": "pt1", 00:10:33.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.179 "is_configured": true, 00:10:33.179 "data_offset": 2048, 00:10:33.179 "data_size": 63488 00:10:33.179 }, 00:10:33.179 { 00:10:33.179 "name": "pt2", 00:10:33.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.179 "is_configured": true, 00:10:33.179 "data_offset": 2048, 00:10:33.179 "data_size": 63488 00:10:33.179 }, 00:10:33.179 { 00:10:33.179 "name": "pt3", 00:10:33.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.179 "is_configured": true, 00:10:33.179 "data_offset": 2048, 00:10:33.179 "data_size": 63488 00:10:33.179 }, 00:10:33.179 { 00:10:33.179 "name": "pt4", 00:10:33.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.179 "is_configured": true, 00:10:33.179 "data_offset": 2048, 00:10:33.179 "data_size": 63488 00:10:33.179 } 00:10:33.179 ] 00:10:33.179 }' 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.179 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.438 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.699 [2024-11-26 22:55:12.569198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.699 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.699 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.699 "name": "raid_bdev1", 00:10:33.699 "aliases": [ 00:10:33.699 "0b5a43f4-db9b-44de-9893-003d82d68bbe" 00:10:33.699 ], 00:10:33.699 "product_name": "Raid Volume", 00:10:33.699 "block_size": 512, 00:10:33.699 "num_blocks": 253952, 00:10:33.699 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:33.699 "assigned_rate_limits": { 00:10:33.699 "rw_ios_per_sec": 0, 00:10:33.699 "rw_mbytes_per_sec": 0, 00:10:33.699 "r_mbytes_per_sec": 0, 00:10:33.699 "w_mbytes_per_sec": 0 00:10:33.699 }, 00:10:33.699 "claimed": false, 00:10:33.699 "zoned": false, 00:10:33.699 "supported_io_types": { 00:10:33.699 "read": true, 00:10:33.699 "write": true, 00:10:33.699 "unmap": true, 00:10:33.699 "flush": true, 00:10:33.699 "reset": true, 00:10:33.699 "nvme_admin": false, 00:10:33.699 "nvme_io": false, 00:10:33.699 "nvme_io_md": false, 00:10:33.699 "write_zeroes": true, 00:10:33.699 "zcopy": false, 00:10:33.699 "get_zone_info": false, 00:10:33.699 "zone_management": false, 00:10:33.699 "zone_append": false, 00:10:33.699 "compare": false, 00:10:33.699 "compare_and_write": false, 00:10:33.699 "abort": 
false, 00:10:33.699 "seek_hole": false, 00:10:33.699 "seek_data": false, 00:10:33.699 "copy": false, 00:10:33.699 "nvme_iov_md": false 00:10:33.699 }, 00:10:33.699 "memory_domains": [ 00:10:33.699 { 00:10:33.699 "dma_device_id": "system", 00:10:33.699 "dma_device_type": 1 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.699 "dma_device_type": 2 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "system", 00:10:33.699 "dma_device_type": 1 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.699 "dma_device_type": 2 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "system", 00:10:33.699 "dma_device_type": 1 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.699 "dma_device_type": 2 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "system", 00:10:33.699 "dma_device_type": 1 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.699 "dma_device_type": 2 00:10:33.699 } 00:10:33.699 ], 00:10:33.699 "driver_specific": { 00:10:33.699 "raid": { 00:10:33.699 "uuid": "0b5a43f4-db9b-44de-9893-003d82d68bbe", 00:10:33.699 "strip_size_kb": 64, 00:10:33.699 "state": "online", 00:10:33.699 "raid_level": "concat", 00:10:33.699 "superblock": true, 00:10:33.699 "num_base_bdevs": 4, 00:10:33.699 "num_base_bdevs_discovered": 4, 00:10:33.699 "num_base_bdevs_operational": 4, 00:10:33.699 "base_bdevs_list": [ 00:10:33.699 { 00:10:33.699 "name": "pt1", 00:10:33.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.699 "is_configured": true, 00:10:33.699 "data_offset": 2048, 00:10:33.699 "data_size": 63488 00:10:33.699 }, 00:10:33.699 { 00:10:33.699 "name": "pt2", 00:10:33.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.700 "is_configured": true, 00:10:33.700 "data_offset": 2048, 00:10:33.700 "data_size": 63488 00:10:33.700 }, 00:10:33.700 { 00:10:33.700 "name": "pt3", 
00:10:33.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.700 "is_configured": true, 00:10:33.700 "data_offset": 2048, 00:10:33.700 "data_size": 63488 00:10:33.700 }, 00:10:33.700 { 00:10:33.700 "name": "pt4", 00:10:33.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.700 "is_configured": true, 00:10:33.700 "data_offset": 2048, 00:10:33.700 "data_size": 63488 00:10:33.700 } 00:10:33.700 ] 00:10:33.700 } 00:10:33.700 } 00:10:33.700 }' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:33.700 pt2 00:10:33.700 pt3 00:10:33.700 pt4' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.700 22:55:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.700 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:33.960 [2024-11-26 22:55:12.897270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0b5a43f4-db9b-44de-9893-003d82d68bbe '!=' 0b5a43f4-db9b-44de-9893-003d82d68bbe ']' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85080 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 
-- # '[' -z 85080 ']' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85080 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85080 00:10:33.960 killing process with pid 85080 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85080' 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85080 00:10:33.960 [2024-11-26 22:55:12.980486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.960 [2024-11-26 22:55:12.980582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.960 22:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85080 00:10:33.960 [2024-11-26 22:55:12.980684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.960 [2024-11-26 22:55:12.980696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:33.960 [2024-11-26 22:55:13.062050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.530 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:34.530 00:10:34.530 real 0m4.273s 00:10:34.530 user 0m6.496s 00:10:34.530 sys 0m1.054s 00:10:34.530 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:34.530 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.530 ************************************ 00:10:34.530 END TEST raid_superblock_test 00:10:34.530 ************************************ 00:10:34.530 22:55:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:34.530 22:55:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.530 22:55:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.530 22:55:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.530 ************************************ 00:10:34.530 START TEST raid_read_error_test 00:10:34.530 ************************************ 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wkpIP6haX0 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=85329 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85329 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85329 ']' 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.530 22:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.530 [2024-11-26 22:55:13.571555] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:34.530 [2024-11-26 22:55:13.571691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85329 ] 00:10:34.790 [2024-11-26 22:55:13.707640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:34.790 [2024-11-26 22:55:13.747311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.790 [2024-11-26 22:55:13.786770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.790 [2024-11-26 22:55:13.864216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.790 [2024-11-26 22:55:13.864268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 BaseBdev1_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 true 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.359 [2024-11-26 22:55:14.439034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.359 [2024-11-26 22:55:14.439114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.359 [2024-11-26 22:55:14.439136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.359 [2024-11-26 22:55:14.439153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.359 [2024-11-26 22:55:14.441680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.359 [2024-11-26 22:55:14.441723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.359 BaseBdev1 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 BaseBdev2_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 true 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.359 22:55:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.359 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [2024-11-26 22:55:14.485872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.619 [2024-11-26 22:55:14.485935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.619 [2024-11-26 22:55:14.485970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.619 [2024-11-26 22:55:14.485984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.619 [2024-11-26 22:55:14.488495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.619 [2024-11-26 22:55:14.488535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.619 BaseBdev2 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 BaseBdev3_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 true 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [2024-11-26 22:55:14.532922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:35.619 [2024-11-26 22:55:14.532984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.619 [2024-11-26 22:55:14.533019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:35.619 [2024-11-26 22:55:14.533033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.619 [2024-11-26 22:55:14.535396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.619 [2024-11-26 22:55:14.535443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:35.619 BaseBdev3 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 BaseBdev4_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 true 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [2024-11-26 22:55:14.587520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:35.619 [2024-11-26 22:55:14.587583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.619 [2024-11-26 22:55:14.587620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:35.619 [2024-11-26 22:55:14.587633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.619 [2024-11-26 22:55:14.589970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.619 [2024-11-26 22:55:14.590015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:35.619 BaseBdev4 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [2024-11-26 22:55:14.599581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.619 [2024-11-26 22:55:14.601697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.619 [2024-11-26 22:55:14.601778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.619 [2024-11-26 22:55:14.601837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.619 [2024-11-26 22:55:14.602069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.619 [2024-11-26 22:55:14.602095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.619 [2024-11-26 22:55:14.602376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:35.619 [2024-11-26 22:55:14.602535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.619 [2024-11-26 22:55:14.602559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:35.619 [2024-11-26 22:55:14.602708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.619 22:55:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.619 "name": "raid_bdev1", 00:10:35.619 "uuid": "b0acc726-50f4-4fa1-bfee-729240fb3854", 00:10:35.619 "strip_size_kb": 64, 00:10:35.619 "state": "online", 00:10:35.619 "raid_level": "concat", 00:10:35.619 "superblock": true, 00:10:35.619 "num_base_bdevs": 4, 00:10:35.619 "num_base_bdevs_discovered": 4, 00:10:35.619 "num_base_bdevs_operational": 4, 00:10:35.619 "base_bdevs_list": [ 00:10:35.619 { 00:10:35.619 "name": "BaseBdev1", 00:10:35.619 "uuid": "8607cba3-dbde-535c-a653-b163a6cb5830", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 2048, 00:10:35.619 "data_size": 63488 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev2", 00:10:35.619 "uuid": "eebb4e97-f417-5be9-a641-853bca81c0c8", 
00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 2048, 00:10:35.619 "data_size": 63488 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev3", 00:10:35.619 "uuid": "939fc54d-82b8-5ca9-8ed8-e22f9a1672b9", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 2048, 00:10:35.619 "data_size": 63488 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev4", 00:10:35.619 "uuid": "1240d2d4-825e-588c-bffc-cafdc57359f6", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 2048, 00:10:35.619 "data_size": 63488 00:10:35.619 } 00:10:35.619 ] 00:10:35.619 }' 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.619 22:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.190 22:55:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.190 22:55:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.190 [2024-11-26 22:55:15.156210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:37.128 22:55:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.128 "name": "raid_bdev1", 00:10:37.128 "uuid": "b0acc726-50f4-4fa1-bfee-729240fb3854", 00:10:37.128 "strip_size_kb": 64, 00:10:37.128 "state": "online", 00:10:37.128 "raid_level": "concat", 00:10:37.128 "superblock": true, 00:10:37.128 "num_base_bdevs": 4, 
00:10:37.128 "num_base_bdevs_discovered": 4, 00:10:37.128 "num_base_bdevs_operational": 4, 00:10:37.128 "base_bdevs_list": [ 00:10:37.128 { 00:10:37.128 "name": "BaseBdev1", 00:10:37.128 "uuid": "8607cba3-dbde-535c-a653-b163a6cb5830", 00:10:37.128 "is_configured": true, 00:10:37.128 "data_offset": 2048, 00:10:37.128 "data_size": 63488 00:10:37.128 }, 00:10:37.128 { 00:10:37.128 "name": "BaseBdev2", 00:10:37.128 "uuid": "eebb4e97-f417-5be9-a641-853bca81c0c8", 00:10:37.128 "is_configured": true, 00:10:37.128 "data_offset": 2048, 00:10:37.128 "data_size": 63488 00:10:37.128 }, 00:10:37.128 { 00:10:37.128 "name": "BaseBdev3", 00:10:37.128 "uuid": "939fc54d-82b8-5ca9-8ed8-e22f9a1672b9", 00:10:37.128 "is_configured": true, 00:10:37.128 "data_offset": 2048, 00:10:37.128 "data_size": 63488 00:10:37.128 }, 00:10:37.128 { 00:10:37.128 "name": "BaseBdev4", 00:10:37.128 "uuid": "1240d2d4-825e-588c-bffc-cafdc57359f6", 00:10:37.128 "is_configured": true, 00:10:37.128 "data_offset": 2048, 00:10:37.128 "data_size": 63488 00:10:37.128 } 00:10:37.128 ] 00:10:37.128 }' 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.128 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 [2024-11-26 22:55:16.584729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.696 [2024-11-26 22:55:16.584779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.696 [2024-11-26 22:55:16.587444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.696 [2024-11-26 22:55:16.587520] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.696 [2024-11-26 22:55:16.587571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.696 [2024-11-26 22:55:16.587594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:37.696 { 00:10:37.696 "results": [ 00:10:37.696 { 00:10:37.696 "job": "raid_bdev1", 00:10:37.696 "core_mask": "0x1", 00:10:37.696 "workload": "randrw", 00:10:37.696 "percentage": 50, 00:10:37.696 "status": "finished", 00:10:37.696 "queue_depth": 1, 00:10:37.696 "io_size": 131072, 00:10:37.696 "runtime": 1.426305, 00:10:37.696 "iops": 14109.885333080932, 00:10:37.696 "mibps": 1763.7356666351166, 00:10:37.696 "io_failed": 1, 00:10:37.696 "io_timeout": 0, 00:10:37.696 "avg_latency_us": 99.19255885688834, 00:10:37.696 "min_latency_us": 25.660245794473983, 00:10:37.696 "max_latency_us": 1385.2070077573433 00:10:37.696 } 00:10:37.696 ], 00:10:37.696 "core_count": 1 00:10:37.696 } 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85329 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85329 ']' 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85329 00:10:37.696 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85329 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.697 killing process with pid 85329 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85329' 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85329 00:10:37.697 [2024-11-26 22:55:16.633944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.697 22:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85329 00:10:37.697 [2024-11-26 22:55:16.699507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wkpIP6haX0 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:37.955 00:10:37.955 real 0m3.572s 00:10:37.955 user 0m4.409s 00:10:37.955 sys 0m0.647s 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.955 22:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.956 ************************************ 00:10:37.956 END TEST raid_read_error_test 00:10:37.956 ************************************ 00:10:38.215 22:55:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:10:38.215 22:55:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.215 22:55:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.215 22:55:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.215 ************************************ 00:10:38.215 START TEST raid_write_error_test 00:10:38.215 ************************************ 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:38.215 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MFGdlfqhQG 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85463 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85463 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85463 ']' 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.216 22:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 [2024-11-26 22:55:17.230079] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:38.216 [2024-11-26 22:55:17.230208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85463 ] 00:10:38.478 [2024-11-26 22:55:17.370564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:38.478 [2024-11-26 22:55:17.409815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.478 [2024-11-26 22:55:17.448445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.478 [2024-11-26 22:55:17.525155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.478 [2024-11-26 22:55:17.525214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.046 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 BaseBdev1_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 true 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 [2024-11-26 22:55:18.084142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:39.047 [2024-11-26 22:55:18.084222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.047 [2024-11-26 22:55:18.084249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:39.047 [2024-11-26 22:55:18.084279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.047 [2024-11-26 22:55:18.086759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.047 [2024-11-26 22:55:18.086807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:39.047 BaseBdev1 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 BaseBdev2_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 true 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 [2024-11-26 22:55:18.130698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.047 [2024-11-26 22:55:18.130769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.047 [2024-11-26 22:55:18.130787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:39.047 [2024-11-26 22:55:18.130801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.047 [2024-11-26 22:55:18.133193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.047 [2024-11-26 22:55:18.133232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.047 BaseBdev2 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 BaseBdev3_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 true 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.047 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.307 [2024-11-26 22:55:18.177696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:39.307 [2024-11-26 22:55:18.177751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.307 [2024-11-26 22:55:18.177770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:39.307 [2024-11-26 22:55:18.177784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.307 [2024-11-26 22:55:18.180218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.307 [2024-11-26 22:55:18.180273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:39.307 BaseBdev3 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.307 BaseBdev4_malloc 00:10:39.307 
22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.307 true 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.307 [2024-11-26 22:55:18.237937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:39.307 [2024-11-26 22:55:18.238013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.307 [2024-11-26 22:55:18.238034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.307 [2024-11-26 22:55:18.238048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.307 [2024-11-26 22:55:18.240594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.307 [2024-11-26 22:55:18.240635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:39.307 BaseBdev4 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:39.307 22:55:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.307 [2024-11-26 22:55:18.249995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.307 [2024-11-26 22:55:18.252123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.307 [2024-11-26 22:55:18.252225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.307 [2024-11-26 22:55:18.252299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.307 [2024-11-26 22:55:18.252531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.307 [2024-11-26 22:55:18.252577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.307 [2024-11-26 22:55:18.252854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:39.307 [2024-11-26 22:55:18.253020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.307 [2024-11-26 22:55:18.253046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:39.307 [2024-11-26 22:55:18.253196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.307 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.308 "name": "raid_bdev1", 00:10:39.308 "uuid": "ef32b51b-5a38-42a7-990b-92de5c1a9880", 00:10:39.308 "strip_size_kb": 64, 00:10:39.308 "state": "online", 00:10:39.308 "raid_level": "concat", 00:10:39.308 "superblock": true, 00:10:39.308 "num_base_bdevs": 4, 00:10:39.308 "num_base_bdevs_discovered": 4, 00:10:39.308 "num_base_bdevs_operational": 4, 00:10:39.308 "base_bdevs_list": [ 00:10:39.308 { 00:10:39.308 "name": "BaseBdev1", 00:10:39.308 "uuid": "939c2b7b-4d57-5f4d-b4c4-d47ae82094fe", 00:10:39.308 "is_configured": true, 00:10:39.308 "data_offset": 2048, 00:10:39.308 "data_size": 63488 00:10:39.308 }, 00:10:39.308 { 00:10:39.308 
"name": "BaseBdev2", 00:10:39.308 "uuid": "2ffd9aa5-a94a-5358-b8ae-c53fafa80879", 00:10:39.308 "is_configured": true, 00:10:39.308 "data_offset": 2048, 00:10:39.308 "data_size": 63488 00:10:39.308 }, 00:10:39.308 { 00:10:39.308 "name": "BaseBdev3", 00:10:39.308 "uuid": "76544d9d-f337-5024-b044-ec1f286b5e19", 00:10:39.308 "is_configured": true, 00:10:39.308 "data_offset": 2048, 00:10:39.308 "data_size": 63488 00:10:39.308 }, 00:10:39.308 { 00:10:39.308 "name": "BaseBdev4", 00:10:39.308 "uuid": "59252be4-0598-58e6-9d25-62b92f3fa65e", 00:10:39.308 "is_configured": true, 00:10:39.308 "data_offset": 2048, 00:10:39.308 "data_size": 63488 00:10:39.308 } 00:10:39.308 ] 00:10:39.308 }' 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.308 22:55:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.567 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.567 22:55:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.826 [2024-11-26 22:55:18.746586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.772 "name": "raid_bdev1", 00:10:40.772 "uuid": "ef32b51b-5a38-42a7-990b-92de5c1a9880", 00:10:40.772 "strip_size_kb": 64, 00:10:40.772 "state": "online", 
00:10:40.772 "raid_level": "concat", 00:10:40.772 "superblock": true, 00:10:40.772 "num_base_bdevs": 4, 00:10:40.772 "num_base_bdevs_discovered": 4, 00:10:40.772 "num_base_bdevs_operational": 4, 00:10:40.772 "base_bdevs_list": [ 00:10:40.772 { 00:10:40.772 "name": "BaseBdev1", 00:10:40.772 "uuid": "939c2b7b-4d57-5f4d-b4c4-d47ae82094fe", 00:10:40.772 "is_configured": true, 00:10:40.772 "data_offset": 2048, 00:10:40.772 "data_size": 63488 00:10:40.772 }, 00:10:40.772 { 00:10:40.772 "name": "BaseBdev2", 00:10:40.772 "uuid": "2ffd9aa5-a94a-5358-b8ae-c53fafa80879", 00:10:40.772 "is_configured": true, 00:10:40.772 "data_offset": 2048, 00:10:40.772 "data_size": 63488 00:10:40.772 }, 00:10:40.772 { 00:10:40.772 "name": "BaseBdev3", 00:10:40.772 "uuid": "76544d9d-f337-5024-b044-ec1f286b5e19", 00:10:40.772 "is_configured": true, 00:10:40.772 "data_offset": 2048, 00:10:40.772 "data_size": 63488 00:10:40.772 }, 00:10:40.772 { 00:10:40.772 "name": "BaseBdev4", 00:10:40.772 "uuid": "59252be4-0598-58e6-9d25-62b92f3fa65e", 00:10:40.772 "is_configured": true, 00:10:40.772 "data_offset": 2048, 00:10:40.772 "data_size": 63488 00:10:40.772 } 00:10:40.772 ] 00:10:40.772 }' 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.772 22:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.056 [2024-11-26 22:55:20.138408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.056 [2024-11-26 22:55:20.138462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.056 [2024-11-26 22:55:20.141052] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.056 [2024-11-26 22:55:20.141158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.056 [2024-11-26 22:55:20.141214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.056 [2024-11-26 22:55:20.141239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:41.056 { 00:10:41.056 "results": [ 00:10:41.056 { 00:10:41.056 "job": "raid_bdev1", 00:10:41.056 "core_mask": "0x1", 00:10:41.056 "workload": "randrw", 00:10:41.056 "percentage": 50, 00:10:41.056 "status": "finished", 00:10:41.056 "queue_depth": 1, 00:10:41.056 "io_size": 131072, 00:10:41.056 "runtime": 1.389704, 00:10:41.056 "iops": 14179.278465054429, 00:10:41.056 "mibps": 1772.4098081318036, 00:10:41.056 "io_failed": 1, 00:10:41.056 "io_timeout": 0, 00:10:41.056 "avg_latency_us": 98.7416973199929, 00:10:41.056 "min_latency_us": 25.994944652662774, 00:10:41.056 "max_latency_us": 1370.9265231412883 00:10:41.056 } 00:10:41.056 ], 00:10:41.056 "core_count": 1 00:10:41.056 } 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85463 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85463 ']' 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85463 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85463 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.056 killing process with pid 85463 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85463' 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85463 00:10:41.056 [2024-11-26 22:55:20.170647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.056 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85463 00:10:41.326 [2024-11-26 22:55:20.235433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MFGdlfqhQG 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:41.586 ************************************ 00:10:41.586 END TEST raid_write_error_test 00:10:41.586 ************************************ 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:41.586 00:10:41.586 real 0m3.465s 00:10:41.586 user 0m4.154s 00:10:41.586 sys 0m0.676s 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.586 22:55:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.586 22:55:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:41.586 22:55:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:41.586 22:55:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.586 22:55:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.586 22:55:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.586 ************************************ 00:10:41.586 START TEST raid_state_function_test 00:10:41.586 ************************************ 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.586 22:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85596 00:10:41.586 22:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85596' 00:10:41.586 Process raid pid: 85596 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85596 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 85596 ']' 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.586 22:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.846 [2024-11-26 22:55:20.769471] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:41.846 [2024-11-26 22:55:20.769635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.846 [2024-11-26 22:55:20.911655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:41.846 [2024-11-26 22:55:20.945643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.105 [2024-11-26 22:55:20.984989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.105 [2024-11-26 22:55:21.061315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.105 [2024-11-26 22:55:21.061356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 [2024-11-26 22:55:21.588606] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.673 [2024-11-26 22:55:21.588670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.673 [2024-11-26 22:55:21.588686] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.673 [2024-11-26 22:55:21.588696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.673 [2024-11-26 22:55:21.588710] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.673 [2024-11-26 22:55:21.588719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.673 [2024-11-26 22:55:21.588732] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.673 [2024-11-26 
22:55:21.588741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.673 "name": "Existed_Raid", 00:10:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.673 "strip_size_kb": 0, 00:10:42.673 "state": "configuring", 00:10:42.673 "raid_level": "raid1", 00:10:42.673 "superblock": false, 00:10:42.673 "num_base_bdevs": 4, 00:10:42.673 "num_base_bdevs_discovered": 0, 00:10:42.673 "num_base_bdevs_operational": 4, 00:10:42.673 "base_bdevs_list": [ 00:10:42.673 { 00:10:42.673 "name": "BaseBdev1", 00:10:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.673 "is_configured": false, 00:10:42.673 "data_offset": 0, 00:10:42.673 "data_size": 0 00:10:42.673 }, 00:10:42.673 { 00:10:42.673 "name": "BaseBdev2", 00:10:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.673 "is_configured": false, 00:10:42.673 "data_offset": 0, 00:10:42.673 "data_size": 0 00:10:42.673 }, 00:10:42.673 { 00:10:42.673 "name": "BaseBdev3", 00:10:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.673 "is_configured": false, 00:10:42.673 "data_offset": 0, 00:10:42.673 "data_size": 0 00:10:42.673 }, 00:10:42.673 { 00:10:42.673 "name": "BaseBdev4", 00:10:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.673 "is_configured": false, 00:10:42.673 "data_offset": 0, 00:10:42.673 "data_size": 0 00:10:42.673 } 00:10:42.673 ] 00:10:42.673 }' 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.673 22:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.933 [2024-11-26 22:55:22.044614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:42.933 [2024-11-26 22:55:22.044657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.933 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.933 [2024-11-26 22:55:22.056639] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.933 [2024-11-26 22:55:22.056688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.933 [2024-11-26 22:55:22.056702] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.933 [2024-11-26 22:55:22.056711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.933 [2024-11-26 22:55:22.056722] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.933 [2024-11-26 22:55:22.056731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.933 [2024-11-26 22:55:22.056741] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.933 [2024-11-26 22:55:22.056750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.193 22:55:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.193 [2024-11-26 22:55:22.083798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.193 BaseBdev1 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.193 [ 00:10:43.193 { 00:10:43.193 "name": "BaseBdev1", 00:10:43.193 "aliases": [ 
00:10:43.193 "87dcc0f7-f9ac-439c-adaf-b544d705c637" 00:10:43.193 ], 00:10:43.193 "product_name": "Malloc disk", 00:10:43.193 "block_size": 512, 00:10:43.193 "num_blocks": 65536, 00:10:43.193 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:43.193 "assigned_rate_limits": { 00:10:43.193 "rw_ios_per_sec": 0, 00:10:43.193 "rw_mbytes_per_sec": 0, 00:10:43.193 "r_mbytes_per_sec": 0, 00:10:43.193 "w_mbytes_per_sec": 0 00:10:43.193 }, 00:10:43.193 "claimed": true, 00:10:43.193 "claim_type": "exclusive_write", 00:10:43.193 "zoned": false, 00:10:43.193 "supported_io_types": { 00:10:43.193 "read": true, 00:10:43.193 "write": true, 00:10:43.193 "unmap": true, 00:10:43.193 "flush": true, 00:10:43.193 "reset": true, 00:10:43.193 "nvme_admin": false, 00:10:43.193 "nvme_io": false, 00:10:43.193 "nvme_io_md": false, 00:10:43.193 "write_zeroes": true, 00:10:43.193 "zcopy": true, 00:10:43.193 "get_zone_info": false, 00:10:43.193 "zone_management": false, 00:10:43.193 "zone_append": false, 00:10:43.193 "compare": false, 00:10:43.193 "compare_and_write": false, 00:10:43.193 "abort": true, 00:10:43.193 "seek_hole": false, 00:10:43.193 "seek_data": false, 00:10:43.193 "copy": true, 00:10:43.193 "nvme_iov_md": false 00:10:43.193 }, 00:10:43.193 "memory_domains": [ 00:10:43.193 { 00:10:43.193 "dma_device_id": "system", 00:10:43.193 "dma_device_type": 1 00:10:43.193 }, 00:10:43.193 { 00:10:43.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.193 "dma_device_type": 2 00:10:43.193 } 00:10:43.193 ], 00:10:43.193 "driver_specific": {} 00:10:43.193 } 00:10:43.193 ] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.193 "name": "Existed_Raid", 00:10:43.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.193 "strip_size_kb": 0, 00:10:43.193 "state": "configuring", 00:10:43.193 "raid_level": "raid1", 00:10:43.193 "superblock": false, 00:10:43.193 "num_base_bdevs": 4, 00:10:43.193 "num_base_bdevs_discovered": 1, 00:10:43.193 "num_base_bdevs_operational": 4, 
00:10:43.193 "base_bdevs_list": [ 00:10:43.193 { 00:10:43.193 "name": "BaseBdev1", 00:10:43.193 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:43.193 "is_configured": true, 00:10:43.193 "data_offset": 0, 00:10:43.193 "data_size": 65536 00:10:43.193 }, 00:10:43.193 { 00:10:43.193 "name": "BaseBdev2", 00:10:43.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.193 "is_configured": false, 00:10:43.193 "data_offset": 0, 00:10:43.193 "data_size": 0 00:10:43.193 }, 00:10:43.193 { 00:10:43.193 "name": "BaseBdev3", 00:10:43.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.193 "is_configured": false, 00:10:43.193 "data_offset": 0, 00:10:43.193 "data_size": 0 00:10:43.193 }, 00:10:43.193 { 00:10:43.193 "name": "BaseBdev4", 00:10:43.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.193 "is_configured": false, 00:10:43.193 "data_offset": 0, 00:10:43.193 "data_size": 0 00:10:43.193 } 00:10:43.193 ] 00:10:43.193 }' 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.193 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.454 [2024-11-26 22:55:22.539959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.454 [2024-11-26 22:55:22.540030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.454 [2024-11-26 22:55:22.551997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.454 [2024-11-26 22:55:22.554199] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.454 [2024-11-26 22:55:22.554245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.454 [2024-11-26 22:55:22.554271] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.454 [2024-11-26 22:55:22.554281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.454 [2024-11-26 22:55:22.554291] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.454 [2024-11-26 22:55:22.554307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.454 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.714 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.714 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.714 "name": "Existed_Raid", 00:10:43.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.714 "strip_size_kb": 0, 00:10:43.714 "state": "configuring", 00:10:43.714 "raid_level": "raid1", 00:10:43.714 "superblock": false, 00:10:43.714 "num_base_bdevs": 4, 00:10:43.714 "num_base_bdevs_discovered": 1, 00:10:43.714 "num_base_bdevs_operational": 4, 00:10:43.714 "base_bdevs_list": [ 00:10:43.714 { 00:10:43.714 "name": "BaseBdev1", 00:10:43.714 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:43.714 "is_configured": true, 00:10:43.714 "data_offset": 0, 00:10:43.714 "data_size": 65536 00:10:43.714 }, 00:10:43.714 { 
00:10:43.714 "name": "BaseBdev2", 00:10:43.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.714 "is_configured": false, 00:10:43.714 "data_offset": 0, 00:10:43.714 "data_size": 0 00:10:43.714 }, 00:10:43.714 { 00:10:43.714 "name": "BaseBdev3", 00:10:43.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.714 "is_configured": false, 00:10:43.714 "data_offset": 0, 00:10:43.714 "data_size": 0 00:10:43.714 }, 00:10:43.714 { 00:10:43.714 "name": "BaseBdev4", 00:10:43.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.714 "is_configured": false, 00:10:43.714 "data_offset": 0, 00:10:43.714 "data_size": 0 00:10:43.714 } 00:10:43.714 ] 00:10:43.714 }' 00:10:43.714 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.714 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.974 22:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.974 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.974 22:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.974 [2024-11-26 22:55:23.016998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.974 BaseBdev2 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.974 [ 00:10:43.974 { 00:10:43.974 "name": "BaseBdev2", 00:10:43.974 "aliases": [ 00:10:43.974 "d7dcb7c7-1476-4104-9313-584eb1d3d01d" 00:10:43.974 ], 00:10:43.974 "product_name": "Malloc disk", 00:10:43.974 "block_size": 512, 00:10:43.974 "num_blocks": 65536, 00:10:43.974 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:43.974 "assigned_rate_limits": { 00:10:43.974 "rw_ios_per_sec": 0, 00:10:43.974 "rw_mbytes_per_sec": 0, 00:10:43.974 "r_mbytes_per_sec": 0, 00:10:43.974 "w_mbytes_per_sec": 0 00:10:43.974 }, 00:10:43.974 "claimed": true, 00:10:43.974 "claim_type": "exclusive_write", 00:10:43.974 "zoned": false, 00:10:43.974 "supported_io_types": { 00:10:43.974 "read": true, 00:10:43.974 "write": true, 00:10:43.974 "unmap": true, 00:10:43.974 "flush": true, 00:10:43.974 "reset": true, 00:10:43.974 "nvme_admin": false, 00:10:43.974 "nvme_io": false, 00:10:43.974 "nvme_io_md": false, 00:10:43.974 "write_zeroes": true, 00:10:43.974 "zcopy": true, 00:10:43.974 "get_zone_info": false, 00:10:43.974 "zone_management": false, 
00:10:43.974 "zone_append": false, 00:10:43.974 "compare": false, 00:10:43.974 "compare_and_write": false, 00:10:43.974 "abort": true, 00:10:43.974 "seek_hole": false, 00:10:43.974 "seek_data": false, 00:10:43.974 "copy": true, 00:10:43.974 "nvme_iov_md": false 00:10:43.974 }, 00:10:43.974 "memory_domains": [ 00:10:43.974 { 00:10:43.974 "dma_device_id": "system", 00:10:43.974 "dma_device_type": 1 00:10:43.974 }, 00:10:43.974 { 00:10:43.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.974 "dma_device_type": 2 00:10:43.974 } 00:10:43.974 ], 00:10:43.974 "driver_specific": {} 00:10:43.974 } 00:10:43.974 ] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.974 22:55:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.974 "name": "Existed_Raid", 00:10:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.974 "strip_size_kb": 0, 00:10:43.974 "state": "configuring", 00:10:43.974 "raid_level": "raid1", 00:10:43.974 "superblock": false, 00:10:43.974 "num_base_bdevs": 4, 00:10:43.974 "num_base_bdevs_discovered": 2, 00:10:43.974 "num_base_bdevs_operational": 4, 00:10:43.974 "base_bdevs_list": [ 00:10:43.974 { 00:10:43.974 "name": "BaseBdev1", 00:10:43.974 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:43.974 "is_configured": true, 00:10:43.974 "data_offset": 0, 00:10:43.974 "data_size": 65536 00:10:43.974 }, 00:10:43.974 { 00:10:43.974 "name": "BaseBdev2", 00:10:43.974 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:43.974 "is_configured": true, 00:10:43.974 "data_offset": 0, 00:10:43.974 "data_size": 65536 00:10:43.974 }, 00:10:43.974 { 00:10:43.974 "name": "BaseBdev3", 00:10:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.974 "is_configured": false, 00:10:43.974 "data_offset": 0, 00:10:43.974 "data_size": 0 00:10:43.974 }, 00:10:43.974 { 00:10:43.974 "name": "BaseBdev4", 
00:10:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.974 "is_configured": false, 00:10:43.974 "data_offset": 0, 00:10:43.974 "data_size": 0 00:10:43.974 } 00:10:43.974 ] 00:10:43.974 }' 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.974 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.545 [2024-11-26 22:55:23.497472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.545 BaseBdev3 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.545 [ 00:10:44.545 { 00:10:44.545 "name": "BaseBdev3", 00:10:44.545 "aliases": [ 00:10:44.545 "ba4e84a1-e697-4631-bdc1-98c41e9895cf" 00:10:44.545 ], 00:10:44.545 "product_name": "Malloc disk", 00:10:44.545 "block_size": 512, 00:10:44.545 "num_blocks": 65536, 00:10:44.545 "uuid": "ba4e84a1-e697-4631-bdc1-98c41e9895cf", 00:10:44.545 "assigned_rate_limits": { 00:10:44.545 "rw_ios_per_sec": 0, 00:10:44.545 "rw_mbytes_per_sec": 0, 00:10:44.545 "r_mbytes_per_sec": 0, 00:10:44.545 "w_mbytes_per_sec": 0 00:10:44.545 }, 00:10:44.545 "claimed": true, 00:10:44.545 "claim_type": "exclusive_write", 00:10:44.545 "zoned": false, 00:10:44.545 "supported_io_types": { 00:10:44.545 "read": true, 00:10:44.545 "write": true, 00:10:44.545 "unmap": true, 00:10:44.545 "flush": true, 00:10:44.545 "reset": true, 00:10:44.545 "nvme_admin": false, 00:10:44.545 "nvme_io": false, 00:10:44.545 "nvme_io_md": false, 00:10:44.545 "write_zeroes": true, 00:10:44.545 "zcopy": true, 00:10:44.545 "get_zone_info": false, 00:10:44.545 "zone_management": false, 00:10:44.545 "zone_append": false, 00:10:44.545 "compare": false, 00:10:44.545 "compare_and_write": false, 00:10:44.545 "abort": true, 00:10:44.545 "seek_hole": false, 00:10:44.545 "seek_data": false, 00:10:44.545 "copy": true, 00:10:44.545 "nvme_iov_md": false 00:10:44.545 }, 00:10:44.545 "memory_domains": [ 00:10:44.545 { 00:10:44.545 "dma_device_id": "system", 00:10:44.545 "dma_device_type": 1 00:10:44.545 }, 00:10:44.545 { 00:10:44.545 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.545 "dma_device_type": 2 00:10:44.545 } 00:10:44.545 ], 00:10:44.545 "driver_specific": {} 00:10:44.545 } 00:10:44.545 ] 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.545 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.546 
22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.546 "name": "Existed_Raid", 00:10:44.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.546 "strip_size_kb": 0, 00:10:44.546 "state": "configuring", 00:10:44.546 "raid_level": "raid1", 00:10:44.546 "superblock": false, 00:10:44.546 "num_base_bdevs": 4, 00:10:44.546 "num_base_bdevs_discovered": 3, 00:10:44.546 "num_base_bdevs_operational": 4, 00:10:44.546 "base_bdevs_list": [ 00:10:44.546 { 00:10:44.546 "name": "BaseBdev1", 00:10:44.546 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:44.546 "is_configured": true, 00:10:44.546 "data_offset": 0, 00:10:44.546 "data_size": 65536 00:10:44.546 }, 00:10:44.546 { 00:10:44.546 "name": "BaseBdev2", 00:10:44.546 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:44.546 "is_configured": true, 00:10:44.546 "data_offset": 0, 00:10:44.546 "data_size": 65536 00:10:44.546 }, 00:10:44.546 { 00:10:44.546 "name": "BaseBdev3", 00:10:44.546 "uuid": "ba4e84a1-e697-4631-bdc1-98c41e9895cf", 00:10:44.546 "is_configured": true, 00:10:44.546 "data_offset": 0, 00:10:44.546 "data_size": 65536 00:10:44.546 }, 00:10:44.546 { 00:10:44.546 "name": "BaseBdev4", 00:10:44.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.546 "is_configured": false, 00:10:44.546 "data_offset": 0, 00:10:44.546 "data_size": 0 00:10:44.546 } 00:10:44.546 ] 00:10:44.546 }' 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.546 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.115 [2024-11-26 22:55:23.974536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.115 [2024-11-26 22:55:23.974591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:45.115 [2024-11-26 22:55:23.974604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:45.115 [2024-11-26 22:55:23.974960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:45.115 [2024-11-26 22:55:23.975177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:45.115 [2024-11-26 22:55:23.975197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:45.115 [2024-11-26 22:55:23.975486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.115 BaseBdev4 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.115 
22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.115 22:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.115 [ 00:10:45.115 { 00:10:45.115 "name": "BaseBdev4", 00:10:45.115 "aliases": [ 00:10:45.115 "5db21331-992c-4ddb-81d3-80b3e4a93108" 00:10:45.115 ], 00:10:45.115 "product_name": "Malloc disk", 00:10:45.115 "block_size": 512, 00:10:45.115 "num_blocks": 65536, 00:10:45.115 "uuid": "5db21331-992c-4ddb-81d3-80b3e4a93108", 00:10:45.115 "assigned_rate_limits": { 00:10:45.115 "rw_ios_per_sec": 0, 00:10:45.115 "rw_mbytes_per_sec": 0, 00:10:45.115 "r_mbytes_per_sec": 0, 00:10:45.115 "w_mbytes_per_sec": 0 00:10:45.116 }, 00:10:45.116 "claimed": true, 00:10:45.116 "claim_type": "exclusive_write", 00:10:45.116 "zoned": false, 00:10:45.116 "supported_io_types": { 00:10:45.116 "read": true, 00:10:45.116 "write": true, 00:10:45.116 "unmap": true, 00:10:45.116 "flush": true, 00:10:45.116 "reset": true, 00:10:45.116 "nvme_admin": false, 00:10:45.116 "nvme_io": false, 00:10:45.116 "nvme_io_md": false, 00:10:45.116 "write_zeroes": true, 00:10:45.116 "zcopy": true, 00:10:45.116 "get_zone_info": false, 00:10:45.116 "zone_management": false, 00:10:45.116 "zone_append": false, 00:10:45.116 "compare": false, 00:10:45.116 "compare_and_write": false, 00:10:45.116 "abort": true, 00:10:45.116 "seek_hole": false, 
00:10:45.116 "seek_data": false, 00:10:45.116 "copy": true, 00:10:45.116 "nvme_iov_md": false 00:10:45.116 }, 00:10:45.116 "memory_domains": [ 00:10:45.116 { 00:10:45.116 "dma_device_id": "system", 00:10:45.116 "dma_device_type": 1 00:10:45.116 }, 00:10:45.116 { 00:10:45.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.116 "dma_device_type": 2 00:10:45.116 } 00:10:45.116 ], 00:10:45.116 "driver_specific": {} 00:10:45.116 } 00:10:45.116 ] 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.116 "name": "Existed_Raid", 00:10:45.116 "uuid": "1d5066aa-d882-480b-9b7a-3b756b52defd", 00:10:45.116 "strip_size_kb": 0, 00:10:45.116 "state": "online", 00:10:45.116 "raid_level": "raid1", 00:10:45.116 "superblock": false, 00:10:45.116 "num_base_bdevs": 4, 00:10:45.116 "num_base_bdevs_discovered": 4, 00:10:45.116 "num_base_bdevs_operational": 4, 00:10:45.116 "base_bdevs_list": [ 00:10:45.116 { 00:10:45.116 "name": "BaseBdev1", 00:10:45.116 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:45.116 "is_configured": true, 00:10:45.116 "data_offset": 0, 00:10:45.116 "data_size": 65536 00:10:45.116 }, 00:10:45.116 { 00:10:45.116 "name": "BaseBdev2", 00:10:45.116 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:45.116 "is_configured": true, 00:10:45.116 "data_offset": 0, 00:10:45.116 "data_size": 65536 00:10:45.116 }, 00:10:45.116 { 00:10:45.116 "name": "BaseBdev3", 00:10:45.116 "uuid": "ba4e84a1-e697-4631-bdc1-98c41e9895cf", 00:10:45.116 "is_configured": true, 00:10:45.116 "data_offset": 0, 00:10:45.116 "data_size": 65536 00:10:45.116 }, 00:10:45.116 { 00:10:45.116 "name": "BaseBdev4", 00:10:45.116 "uuid": "5db21331-992c-4ddb-81d3-80b3e4a93108", 00:10:45.116 "is_configured": true, 00:10:45.116 "data_offset": 0, 00:10:45.116 "data_size": 65536 00:10:45.116 } 00:10:45.116 ] 
00:10:45.116 }' 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.116 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.376 [2024-11-26 22:55:24.459005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.376 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.376 "name": "Existed_Raid", 00:10:45.376 "aliases": [ 00:10:45.376 "1d5066aa-d882-480b-9b7a-3b756b52defd" 00:10:45.376 ], 00:10:45.376 "product_name": "Raid Volume", 00:10:45.376 "block_size": 512, 00:10:45.376 "num_blocks": 65536, 00:10:45.376 "uuid": "1d5066aa-d882-480b-9b7a-3b756b52defd", 00:10:45.376 
"assigned_rate_limits": { 00:10:45.376 "rw_ios_per_sec": 0, 00:10:45.376 "rw_mbytes_per_sec": 0, 00:10:45.376 "r_mbytes_per_sec": 0, 00:10:45.376 "w_mbytes_per_sec": 0 00:10:45.376 }, 00:10:45.376 "claimed": false, 00:10:45.376 "zoned": false, 00:10:45.376 "supported_io_types": { 00:10:45.376 "read": true, 00:10:45.376 "write": true, 00:10:45.376 "unmap": false, 00:10:45.376 "flush": false, 00:10:45.376 "reset": true, 00:10:45.376 "nvme_admin": false, 00:10:45.376 "nvme_io": false, 00:10:45.376 "nvme_io_md": false, 00:10:45.376 "write_zeroes": true, 00:10:45.376 "zcopy": false, 00:10:45.376 "get_zone_info": false, 00:10:45.376 "zone_management": false, 00:10:45.376 "zone_append": false, 00:10:45.376 "compare": false, 00:10:45.376 "compare_and_write": false, 00:10:45.376 "abort": false, 00:10:45.376 "seek_hole": false, 00:10:45.376 "seek_data": false, 00:10:45.376 "copy": false, 00:10:45.376 "nvme_iov_md": false 00:10:45.376 }, 00:10:45.376 "memory_domains": [ 00:10:45.376 { 00:10:45.376 "dma_device_id": "system", 00:10:45.376 "dma_device_type": 1 00:10:45.376 }, 00:10:45.376 { 00:10:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.376 "dma_device_type": 2 00:10:45.376 }, 00:10:45.376 { 00:10:45.377 "dma_device_id": "system", 00:10:45.377 "dma_device_type": 1 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.377 "dma_device_type": 2 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "dma_device_id": "system", 00:10:45.377 "dma_device_type": 1 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.377 "dma_device_type": 2 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "dma_device_id": "system", 00:10:45.377 "dma_device_type": 1 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.377 "dma_device_type": 2 00:10:45.377 } 00:10:45.377 ], 00:10:45.377 "driver_specific": { 00:10:45.377 "raid": { 00:10:45.377 "uuid": 
"1d5066aa-d882-480b-9b7a-3b756b52defd", 00:10:45.377 "strip_size_kb": 0, 00:10:45.377 "state": "online", 00:10:45.377 "raid_level": "raid1", 00:10:45.377 "superblock": false, 00:10:45.377 "num_base_bdevs": 4, 00:10:45.377 "num_base_bdevs_discovered": 4, 00:10:45.377 "num_base_bdevs_operational": 4, 00:10:45.377 "base_bdevs_list": [ 00:10:45.377 { 00:10:45.377 "name": "BaseBdev1", 00:10:45.377 "uuid": "87dcc0f7-f9ac-439c-adaf-b544d705c637", 00:10:45.377 "is_configured": true, 00:10:45.377 "data_offset": 0, 00:10:45.377 "data_size": 65536 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "name": "BaseBdev2", 00:10:45.377 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:45.377 "is_configured": true, 00:10:45.377 "data_offset": 0, 00:10:45.377 "data_size": 65536 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "name": "BaseBdev3", 00:10:45.377 "uuid": "ba4e84a1-e697-4631-bdc1-98c41e9895cf", 00:10:45.377 "is_configured": true, 00:10:45.377 "data_offset": 0, 00:10:45.377 "data_size": 65536 00:10:45.377 }, 00:10:45.377 { 00:10:45.377 "name": "BaseBdev4", 00:10:45.377 "uuid": "5db21331-992c-4ddb-81d3-80b3e4a93108", 00:10:45.377 "is_configured": true, 00:10:45.377 "data_offset": 0, 00:10:45.377 "data_size": 65536 00:10:45.377 } 00:10:45.377 ] 00:10:45.377 } 00:10:45.377 } 00:10:45.377 }' 00:10:45.377 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.636 BaseBdev2 00:10:45.636 BaseBdev3 00:10:45.636 BaseBdev4' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.636 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.637 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.637 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.637 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.637 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.637 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.896 [2024-11-26 22:55:24.810828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.896 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.897 "name": "Existed_Raid", 00:10:45.897 "uuid": "1d5066aa-d882-480b-9b7a-3b756b52defd", 00:10:45.897 "strip_size_kb": 0, 00:10:45.897 "state": "online", 00:10:45.897 "raid_level": "raid1", 00:10:45.897 "superblock": false, 00:10:45.897 "num_base_bdevs": 4, 00:10:45.897 "num_base_bdevs_discovered": 3, 00:10:45.897 "num_base_bdevs_operational": 3, 00:10:45.897 "base_bdevs_list": [ 00:10:45.897 { 00:10:45.897 "name": null, 00:10:45.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.897 "is_configured": false, 00:10:45.897 "data_offset": 0, 00:10:45.897 "data_size": 65536 00:10:45.897 }, 00:10:45.897 { 00:10:45.897 "name": "BaseBdev2", 00:10:45.897 "uuid": "d7dcb7c7-1476-4104-9313-584eb1d3d01d", 00:10:45.897 "is_configured": true, 00:10:45.897 "data_offset": 0, 00:10:45.897 "data_size": 65536 00:10:45.897 }, 00:10:45.897 { 00:10:45.897 "name": "BaseBdev3", 00:10:45.897 "uuid": "ba4e84a1-e697-4631-bdc1-98c41e9895cf", 00:10:45.897 "is_configured": true, 00:10:45.897 "data_offset": 0, 00:10:45.897 "data_size": 65536 00:10:45.897 }, 00:10:45.897 { 00:10:45.897 "name": "BaseBdev4", 00:10:45.897 "uuid": "5db21331-992c-4ddb-81d3-80b3e4a93108", 00:10:45.897 "is_configured": true, 00:10:45.897 "data_offset": 0, 00:10:45.897 "data_size": 65536 00:10:45.897 } 00:10:45.897 ] 00:10:45.897 }' 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.897 22:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.156 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.414 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 [2024-11-26 22:55:25.295784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 [2024-11-26 22:55:25.376807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 [2024-11-26 22:55:25.453631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:46.415 [2024-11-26 22:55:25.453814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.415 [2024-11-26 22:55:25.475171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.415 [2024-11-26 22:55:25.475321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.415 [2024-11-26 22:55:25.475399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.415 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 BaseBdev2 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 [ 00:10:46.675 { 00:10:46.675 "name": "BaseBdev2", 00:10:46.675 "aliases": [ 00:10:46.675 "59d85b39-c20e-42c7-9bf7-c51d5e53031c" 00:10:46.675 ], 00:10:46.675 "product_name": "Malloc disk", 00:10:46.675 "block_size": 512, 00:10:46.675 "num_blocks": 65536, 00:10:46.675 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:46.675 "assigned_rate_limits": { 00:10:46.675 "rw_ios_per_sec": 0, 00:10:46.675 "rw_mbytes_per_sec": 0, 00:10:46.675 "r_mbytes_per_sec": 0, 00:10:46.675 "w_mbytes_per_sec": 0 00:10:46.675 }, 00:10:46.675 "claimed": false, 00:10:46.675 "zoned": false, 00:10:46.675 "supported_io_types": { 00:10:46.675 "read": true, 00:10:46.675 "write": true, 00:10:46.675 "unmap": true, 00:10:46.675 "flush": true, 00:10:46.675 "reset": true, 00:10:46.675 "nvme_admin": false, 00:10:46.675 "nvme_io": false, 00:10:46.675 "nvme_io_md": false, 00:10:46.675 "write_zeroes": true, 00:10:46.675 "zcopy": true, 00:10:46.675 "get_zone_info": false, 00:10:46.675 "zone_management": false, 00:10:46.675 "zone_append": false, 00:10:46.675 "compare": false, 00:10:46.675 "compare_and_write": false, 00:10:46.675 "abort": true, 00:10:46.675 "seek_hole": false, 00:10:46.675 "seek_data": false, 00:10:46.675 "copy": true, 00:10:46.675 "nvme_iov_md": false 00:10:46.675 }, 00:10:46.675 "memory_domains": [ 00:10:46.675 { 00:10:46.675 "dma_device_id": "system", 00:10:46.675 "dma_device_type": 1 00:10:46.675 }, 
00:10:46.675 { 00:10:46.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.675 "dma_device_type": 2 00:10:46.675 } 00:10:46.675 ], 00:10:46.675 "driver_specific": {} 00:10:46.675 } 00:10:46.675 ] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 BaseBdev3 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.675 [ 00:10:46.675 { 00:10:46.675 "name": "BaseBdev3", 00:10:46.675 "aliases": [ 00:10:46.675 "ee133d9e-3250-4847-a294-18501a6c2569" 00:10:46.675 ], 00:10:46.675 "product_name": "Malloc disk", 00:10:46.675 "block_size": 512, 00:10:46.675 "num_blocks": 65536, 00:10:46.675 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:46.675 "assigned_rate_limits": { 00:10:46.675 "rw_ios_per_sec": 0, 00:10:46.675 "rw_mbytes_per_sec": 0, 00:10:46.675 "r_mbytes_per_sec": 0, 00:10:46.675 "w_mbytes_per_sec": 0 00:10:46.675 }, 00:10:46.675 "claimed": false, 00:10:46.675 "zoned": false, 00:10:46.675 "supported_io_types": { 00:10:46.675 "read": true, 00:10:46.675 "write": true, 00:10:46.675 "unmap": true, 00:10:46.675 "flush": true, 00:10:46.675 "reset": true, 00:10:46.675 "nvme_admin": false, 00:10:46.675 "nvme_io": false, 00:10:46.675 "nvme_io_md": false, 00:10:46.675 "write_zeroes": true, 00:10:46.675 "zcopy": true, 00:10:46.675 "get_zone_info": false, 00:10:46.675 "zone_management": false, 00:10:46.675 "zone_append": false, 00:10:46.675 "compare": false, 00:10:46.675 "compare_and_write": false, 00:10:46.675 "abort": true, 00:10:46.675 "seek_hole": false, 00:10:46.675 "seek_data": false, 00:10:46.675 "copy": true, 00:10:46.675 "nvme_iov_md": false 00:10:46.675 }, 00:10:46.675 "memory_domains": [ 00:10:46.675 { 00:10:46.675 "dma_device_id": "system", 00:10:46.675 "dma_device_type": 1 00:10:46.675 }, 00:10:46.675 { 
00:10:46.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.675 "dma_device_type": 2 00:10:46.675 } 00:10:46.675 ], 00:10:46.675 "driver_specific": {} 00:10:46.675 } 00:10:46.675 ] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.675 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 BaseBdev4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.676 
22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 [ 00:10:46.676 { 00:10:46.676 "name": "BaseBdev4", 00:10:46.676 "aliases": [ 00:10:46.676 "7db19d24-6378-41be-aeb6-18216273cfaf" 00:10:46.676 ], 00:10:46.676 "product_name": "Malloc disk", 00:10:46.676 "block_size": 512, 00:10:46.676 "num_blocks": 65536, 00:10:46.676 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:46.676 "assigned_rate_limits": { 00:10:46.676 "rw_ios_per_sec": 0, 00:10:46.676 "rw_mbytes_per_sec": 0, 00:10:46.676 "r_mbytes_per_sec": 0, 00:10:46.676 "w_mbytes_per_sec": 0 00:10:46.676 }, 00:10:46.676 "claimed": false, 00:10:46.676 "zoned": false, 00:10:46.676 "supported_io_types": { 00:10:46.676 "read": true, 00:10:46.676 "write": true, 00:10:46.676 "unmap": true, 00:10:46.676 "flush": true, 00:10:46.676 "reset": true, 00:10:46.676 "nvme_admin": false, 00:10:46.676 "nvme_io": false, 00:10:46.676 "nvme_io_md": false, 00:10:46.676 "write_zeroes": true, 00:10:46.676 "zcopy": true, 00:10:46.676 "get_zone_info": false, 00:10:46.676 "zone_management": false, 00:10:46.676 "zone_append": false, 00:10:46.676 "compare": false, 00:10:46.676 "compare_and_write": false, 00:10:46.676 "abort": true, 00:10:46.676 "seek_hole": false, 00:10:46.676 "seek_data": false, 00:10:46.676 "copy": true, 00:10:46.676 "nvme_iov_md": false 00:10:46.676 }, 00:10:46.676 "memory_domains": [ 00:10:46.676 { 00:10:46.676 "dma_device_id": "system", 00:10:46.676 "dma_device_type": 1 00:10:46.676 }, 00:10:46.676 { 00:10:46.676 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.676 "dma_device_type": 2 00:10:46.676 } 00:10:46.676 ], 00:10:46.676 "driver_specific": {} 00:10:46.676 } 00:10:46.676 ] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 [2024-11-26 22:55:25.711125] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.676 [2024-11-26 22:55:25.711231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.676 [2024-11-26 22:55:25.711324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.676 [2024-11-26 22:55:25.713460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.676 [2024-11-26 22:55:25.713574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.676 "name": "Existed_Raid", 00:10:46.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.676 "strip_size_kb": 0, 00:10:46.676 "state": "configuring", 00:10:46.676 "raid_level": "raid1", 00:10:46.676 "superblock": false, 00:10:46.676 "num_base_bdevs": 4, 00:10:46.676 "num_base_bdevs_discovered": 3, 00:10:46.676 "num_base_bdevs_operational": 4, 00:10:46.676 "base_bdevs_list": [ 
00:10:46.676 { 00:10:46.676 "name": "BaseBdev1", 00:10:46.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.676 "is_configured": false, 00:10:46.676 "data_offset": 0, 00:10:46.676 "data_size": 0 00:10:46.676 }, 00:10:46.676 { 00:10:46.676 "name": "BaseBdev2", 00:10:46.676 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:46.676 "is_configured": true, 00:10:46.676 "data_offset": 0, 00:10:46.676 "data_size": 65536 00:10:46.676 }, 00:10:46.676 { 00:10:46.676 "name": "BaseBdev3", 00:10:46.676 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:46.676 "is_configured": true, 00:10:46.676 "data_offset": 0, 00:10:46.676 "data_size": 65536 00:10:46.676 }, 00:10:46.676 { 00:10:46.676 "name": "BaseBdev4", 00:10:46.676 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:46.676 "is_configured": true, 00:10:46.676 "data_offset": 0, 00:10:46.676 "data_size": 65536 00:10:46.676 } 00:10:46.676 ] 00:10:46.676 }' 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.676 22:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.243 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:47.243 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.243 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.243 [2024-11-26 22:55:26.155203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.244 22:55:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.244 "name": "Existed_Raid", 00:10:47.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.244 "strip_size_kb": 0, 00:10:47.244 "state": "configuring", 00:10:47.244 "raid_level": "raid1", 00:10:47.244 "superblock": false, 00:10:47.244 "num_base_bdevs": 4, 00:10:47.244 "num_base_bdevs_discovered": 2, 00:10:47.244 "num_base_bdevs_operational": 4, 00:10:47.244 "base_bdevs_list": [ 00:10:47.244 { 00:10:47.244 "name": "BaseBdev1", 
00:10:47.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.244 "is_configured": false, 00:10:47.244 "data_offset": 0, 00:10:47.244 "data_size": 0 00:10:47.244 }, 00:10:47.244 { 00:10:47.244 "name": null, 00:10:47.244 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:47.244 "is_configured": false, 00:10:47.244 "data_offset": 0, 00:10:47.244 "data_size": 65536 00:10:47.244 }, 00:10:47.244 { 00:10:47.244 "name": "BaseBdev3", 00:10:47.244 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:47.244 "is_configured": true, 00:10:47.244 "data_offset": 0, 00:10:47.244 "data_size": 65536 00:10:47.244 }, 00:10:47.244 { 00:10:47.244 "name": "BaseBdev4", 00:10:47.244 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:47.244 "is_configured": true, 00:10:47.244 "data_offset": 0, 00:10:47.244 "data_size": 65536 00:10:47.244 } 00:10:47.244 ] 00:10:47.244 }' 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.244 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.503 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.503 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.503 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.503 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.503 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 [2024-11-26 22:55:26.648336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.762 BaseBdev1 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 [ 00:10:47.762 { 00:10:47.762 "name": "BaseBdev1", 00:10:47.762 "aliases": [ 00:10:47.762 "e81fb55d-69f5-4dc5-b585-e9ebc565b88d" 00:10:47.762 ], 00:10:47.762 
"product_name": "Malloc disk", 00:10:47.762 "block_size": 512, 00:10:47.762 "num_blocks": 65536, 00:10:47.762 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:47.762 "assigned_rate_limits": { 00:10:47.762 "rw_ios_per_sec": 0, 00:10:47.762 "rw_mbytes_per_sec": 0, 00:10:47.762 "r_mbytes_per_sec": 0, 00:10:47.762 "w_mbytes_per_sec": 0 00:10:47.762 }, 00:10:47.762 "claimed": true, 00:10:47.762 "claim_type": "exclusive_write", 00:10:47.762 "zoned": false, 00:10:47.762 "supported_io_types": { 00:10:47.762 "read": true, 00:10:47.762 "write": true, 00:10:47.762 "unmap": true, 00:10:47.762 "flush": true, 00:10:47.762 "reset": true, 00:10:47.762 "nvme_admin": false, 00:10:47.762 "nvme_io": false, 00:10:47.762 "nvme_io_md": false, 00:10:47.762 "write_zeroes": true, 00:10:47.762 "zcopy": true, 00:10:47.762 "get_zone_info": false, 00:10:47.762 "zone_management": false, 00:10:47.762 "zone_append": false, 00:10:47.762 "compare": false, 00:10:47.762 "compare_and_write": false, 00:10:47.762 "abort": true, 00:10:47.762 "seek_hole": false, 00:10:47.762 "seek_data": false, 00:10:47.762 "copy": true, 00:10:47.762 "nvme_iov_md": false 00:10:47.762 }, 00:10:47.762 "memory_domains": [ 00:10:47.762 { 00:10:47.762 "dma_device_id": "system", 00:10:47.762 "dma_device_type": 1 00:10:47.762 }, 00:10:47.762 { 00:10:47.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.762 "dma_device_type": 2 00:10:47.762 } 00:10:47.762 ], 00:10:47.762 "driver_specific": {} 00:10:47.762 } 00:10:47.762 ] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.762 22:55:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.762 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.762 "name": "Existed_Raid", 00:10:47.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.762 "strip_size_kb": 0, 00:10:47.762 "state": "configuring", 00:10:47.762 "raid_level": "raid1", 00:10:47.762 "superblock": false, 00:10:47.762 "num_base_bdevs": 4, 00:10:47.762 "num_base_bdevs_discovered": 3, 00:10:47.762 "num_base_bdevs_operational": 4, 00:10:47.762 "base_bdevs_list": [ 00:10:47.762 { 00:10:47.762 "name": "BaseBdev1", 
00:10:47.762 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:47.762 "is_configured": true, 00:10:47.762 "data_offset": 0, 00:10:47.762 "data_size": 65536 00:10:47.762 }, 00:10:47.762 { 00:10:47.763 "name": null, 00:10:47.763 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:47.763 "is_configured": false, 00:10:47.763 "data_offset": 0, 00:10:47.763 "data_size": 65536 00:10:47.763 }, 00:10:47.763 { 00:10:47.763 "name": "BaseBdev3", 00:10:47.763 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:47.763 "is_configured": true, 00:10:47.763 "data_offset": 0, 00:10:47.763 "data_size": 65536 00:10:47.763 }, 00:10:47.763 { 00:10:47.763 "name": "BaseBdev4", 00:10:47.763 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:47.763 "is_configured": true, 00:10:47.763 "data_offset": 0, 00:10:47.763 "data_size": 65536 00:10:47.763 } 00:10:47.763 ] 00:10:47.763 }' 00:10:47.763 22:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.763 22:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.022 [2024-11-26 22:55:27.108477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.022 22:55:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.281 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.281 "name": "Existed_Raid", 00:10:48.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.281 "strip_size_kb": 0, 00:10:48.281 "state": "configuring", 00:10:48.281 "raid_level": "raid1", 00:10:48.281 "superblock": false, 00:10:48.281 "num_base_bdevs": 4, 00:10:48.281 "num_base_bdevs_discovered": 2, 00:10:48.281 "num_base_bdevs_operational": 4, 00:10:48.281 "base_bdevs_list": [ 00:10:48.281 { 00:10:48.281 "name": "BaseBdev1", 00:10:48.281 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:48.281 "is_configured": true, 00:10:48.281 "data_offset": 0, 00:10:48.281 "data_size": 65536 00:10:48.281 }, 00:10:48.281 { 00:10:48.281 "name": null, 00:10:48.281 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:48.281 "is_configured": false, 00:10:48.281 "data_offset": 0, 00:10:48.281 "data_size": 65536 00:10:48.281 }, 00:10:48.281 { 00:10:48.281 "name": null, 00:10:48.281 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:48.281 "is_configured": false, 00:10:48.281 "data_offset": 0, 00:10:48.281 "data_size": 65536 00:10:48.281 }, 00:10:48.281 { 00:10:48.281 "name": "BaseBdev4", 00:10:48.281 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:48.281 "is_configured": true, 00:10:48.281 "data_offset": 0, 00:10:48.281 "data_size": 65536 00:10:48.281 } 00:10:48.281 ] 00:10:48.281 }' 00:10:48.281 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.281 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.540 [2024-11-26 22:55:27.600699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.540 "name": "Existed_Raid", 00:10:48.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.540 "strip_size_kb": 0, 00:10:48.540 "state": "configuring", 00:10:48.540 "raid_level": "raid1", 00:10:48.540 "superblock": false, 00:10:48.540 "num_base_bdevs": 4, 00:10:48.540 "num_base_bdevs_discovered": 3, 00:10:48.540 "num_base_bdevs_operational": 4, 00:10:48.540 "base_bdevs_list": [ 00:10:48.540 { 00:10:48.540 "name": "BaseBdev1", 00:10:48.540 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:48.540 "is_configured": true, 00:10:48.540 "data_offset": 0, 00:10:48.540 "data_size": 65536 00:10:48.540 }, 00:10:48.540 { 00:10:48.540 "name": null, 00:10:48.540 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:48.540 "is_configured": false, 00:10:48.540 "data_offset": 0, 00:10:48.540 "data_size": 65536 00:10:48.540 }, 00:10:48.540 { 00:10:48.540 "name": "BaseBdev3", 00:10:48.540 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:48.540 "is_configured": true, 00:10:48.540 "data_offset": 0, 00:10:48.540 "data_size": 65536 00:10:48.540 }, 00:10:48.540 { 00:10:48.540 "name": "BaseBdev4", 00:10:48.540 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:48.540 "is_configured": true, 00:10:48.540 
"data_offset": 0, 00:10:48.540 "data_size": 65536 00:10:48.540 } 00:10:48.540 ] 00:10:48.540 }' 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.540 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.108 [2024-11-26 22:55:28.104852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.108 "name": "Existed_Raid", 00:10:49.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.108 "strip_size_kb": 0, 00:10:49.108 "state": "configuring", 00:10:49.108 "raid_level": "raid1", 00:10:49.108 "superblock": false, 00:10:49.108 "num_base_bdevs": 4, 00:10:49.108 "num_base_bdevs_discovered": 2, 00:10:49.108 "num_base_bdevs_operational": 4, 00:10:49.108 "base_bdevs_list": [ 00:10:49.108 { 00:10:49.108 "name": null, 00:10:49.108 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:49.108 "is_configured": false, 00:10:49.108 "data_offset": 0, 00:10:49.108 "data_size": 65536 00:10:49.108 }, 00:10:49.108 { 
00:10:49.108 "name": null, 00:10:49.108 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:49.108 "is_configured": false, 00:10:49.108 "data_offset": 0, 00:10:49.108 "data_size": 65536 00:10:49.108 }, 00:10:49.108 { 00:10:49.108 "name": "BaseBdev3", 00:10:49.108 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:49.108 "is_configured": true, 00:10:49.108 "data_offset": 0, 00:10:49.108 "data_size": 65536 00:10:49.108 }, 00:10:49.108 { 00:10:49.108 "name": "BaseBdev4", 00:10:49.108 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:49.108 "is_configured": true, 00:10:49.108 "data_offset": 0, 00:10:49.108 "data_size": 65536 00:10:49.108 } 00:10:49.108 ] 00:10:49.108 }' 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.108 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.678 [2024-11-26 22:55:28.600893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:49.678 "name": "Existed_Raid", 00:10:49.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.678 "strip_size_kb": 0, 00:10:49.678 "state": "configuring", 00:10:49.678 "raid_level": "raid1", 00:10:49.678 "superblock": false, 00:10:49.678 "num_base_bdevs": 4, 00:10:49.678 "num_base_bdevs_discovered": 3, 00:10:49.678 "num_base_bdevs_operational": 4, 00:10:49.678 "base_bdevs_list": [ 00:10:49.678 { 00:10:49.678 "name": null, 00:10:49.678 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:49.678 "is_configured": false, 00:10:49.678 "data_offset": 0, 00:10:49.678 "data_size": 65536 00:10:49.678 }, 00:10:49.678 { 00:10:49.678 "name": "BaseBdev2", 00:10:49.678 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:49.678 "is_configured": true, 00:10:49.678 "data_offset": 0, 00:10:49.678 "data_size": 65536 00:10:49.678 }, 00:10:49.678 { 00:10:49.678 "name": "BaseBdev3", 00:10:49.678 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:49.678 "is_configured": true, 00:10:49.678 "data_offset": 0, 00:10:49.678 "data_size": 65536 00:10:49.678 }, 00:10:49.678 { 00:10:49.678 "name": "BaseBdev4", 00:10:49.678 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:49.678 "is_configured": true, 00:10:49.678 "data_offset": 0, 00:10:49.678 "data_size": 65536 00:10:49.678 } 00:10:49.678 ] 00:10:49.678 }' 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.678 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.938 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.938 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.938 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.938 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:49.938 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e81fb55d-69f5-4dc5-b585-e9ebc565b88d 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.197 [2024-11-26 22:55:29.162019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.197 [2024-11-26 22:55:29.162171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.197 [2024-11-26 22:55:29.162200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:50.197 [2024-11-26 22:55:29.162563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:50.197 [2024-11-26 22:55:29.162777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.197 [2024-11-26 22:55:29.162831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:50.197 [2024-11-26 22:55:29.163097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.197 NewBaseBdev 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.197 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.197 [ 00:10:50.197 { 00:10:50.197 "name": "NewBaseBdev", 00:10:50.197 "aliases": [ 00:10:50.197 "e81fb55d-69f5-4dc5-b585-e9ebc565b88d" 00:10:50.197 ], 00:10:50.197 "product_name": "Malloc disk", 00:10:50.197 "block_size": 512, 00:10:50.197 "num_blocks": 65536, 
00:10:50.197 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:50.197 "assigned_rate_limits": { 00:10:50.197 "rw_ios_per_sec": 0, 00:10:50.197 "rw_mbytes_per_sec": 0, 00:10:50.197 "r_mbytes_per_sec": 0, 00:10:50.197 "w_mbytes_per_sec": 0 00:10:50.197 }, 00:10:50.197 "claimed": true, 00:10:50.197 "claim_type": "exclusive_write", 00:10:50.197 "zoned": false, 00:10:50.197 "supported_io_types": { 00:10:50.197 "read": true, 00:10:50.197 "write": true, 00:10:50.198 "unmap": true, 00:10:50.198 "flush": true, 00:10:50.198 "reset": true, 00:10:50.198 "nvme_admin": false, 00:10:50.198 "nvme_io": false, 00:10:50.198 "nvme_io_md": false, 00:10:50.198 "write_zeroes": true, 00:10:50.198 "zcopy": true, 00:10:50.198 "get_zone_info": false, 00:10:50.198 "zone_management": false, 00:10:50.198 "zone_append": false, 00:10:50.198 "compare": false, 00:10:50.198 "compare_and_write": false, 00:10:50.198 "abort": true, 00:10:50.198 "seek_hole": false, 00:10:50.198 "seek_data": false, 00:10:50.198 "copy": true, 00:10:50.198 "nvme_iov_md": false 00:10:50.198 }, 00:10:50.198 "memory_domains": [ 00:10:50.198 { 00:10:50.198 "dma_device_id": "system", 00:10:50.198 "dma_device_type": 1 00:10:50.198 }, 00:10:50.198 { 00:10:50.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.198 "dma_device_type": 2 00:10:50.198 } 00:10:50.198 ], 00:10:50.198 "driver_specific": {} 00:10:50.198 } 00:10:50.198 ] 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.198 
22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.198 "name": "Existed_Raid", 00:10:50.198 "uuid": "8f666bfa-9b44-46a9-b1af-50c3e9b6cc74", 00:10:50.198 "strip_size_kb": 0, 00:10:50.198 "state": "online", 00:10:50.198 "raid_level": "raid1", 00:10:50.198 "superblock": false, 00:10:50.198 "num_base_bdevs": 4, 00:10:50.198 "num_base_bdevs_discovered": 4, 00:10:50.198 "num_base_bdevs_operational": 4, 00:10:50.198 "base_bdevs_list": [ 00:10:50.198 { 00:10:50.198 "name": "NewBaseBdev", 00:10:50.198 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:50.198 "is_configured": true, 00:10:50.198 
"data_offset": 0, 00:10:50.198 "data_size": 65536 00:10:50.198 }, 00:10:50.198 { 00:10:50.198 "name": "BaseBdev2", 00:10:50.198 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:50.198 "is_configured": true, 00:10:50.198 "data_offset": 0, 00:10:50.198 "data_size": 65536 00:10:50.198 }, 00:10:50.198 { 00:10:50.198 "name": "BaseBdev3", 00:10:50.198 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:50.198 "is_configured": true, 00:10:50.198 "data_offset": 0, 00:10:50.198 "data_size": 65536 00:10:50.198 }, 00:10:50.198 { 00:10:50.198 "name": "BaseBdev4", 00:10:50.198 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:50.198 "is_configured": true, 00:10:50.198 "data_offset": 0, 00:10:50.198 "data_size": 65536 00:10:50.198 } 00:10:50.198 ] 00:10:50.198 }' 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.198 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.457 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.716 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.716 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.717 [2024-11-26 22:55:29.590505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.717 "name": "Existed_Raid", 00:10:50.717 "aliases": [ 00:10:50.717 "8f666bfa-9b44-46a9-b1af-50c3e9b6cc74" 00:10:50.717 ], 00:10:50.717 "product_name": "Raid Volume", 00:10:50.717 "block_size": 512, 00:10:50.717 "num_blocks": 65536, 00:10:50.717 "uuid": "8f666bfa-9b44-46a9-b1af-50c3e9b6cc74", 00:10:50.717 "assigned_rate_limits": { 00:10:50.717 "rw_ios_per_sec": 0, 00:10:50.717 "rw_mbytes_per_sec": 0, 00:10:50.717 "r_mbytes_per_sec": 0, 00:10:50.717 "w_mbytes_per_sec": 0 00:10:50.717 }, 00:10:50.717 "claimed": false, 00:10:50.717 "zoned": false, 00:10:50.717 "supported_io_types": { 00:10:50.717 "read": true, 00:10:50.717 "write": true, 00:10:50.717 "unmap": false, 00:10:50.717 "flush": false, 00:10:50.717 "reset": true, 00:10:50.717 "nvme_admin": false, 00:10:50.717 "nvme_io": false, 00:10:50.717 "nvme_io_md": false, 00:10:50.717 "write_zeroes": true, 00:10:50.717 "zcopy": false, 00:10:50.717 "get_zone_info": false, 00:10:50.717 "zone_management": false, 00:10:50.717 "zone_append": false, 00:10:50.717 "compare": false, 00:10:50.717 "compare_and_write": false, 00:10:50.717 "abort": false, 00:10:50.717 "seek_hole": false, 00:10:50.717 "seek_data": false, 00:10:50.717 "copy": false, 00:10:50.717 "nvme_iov_md": false 00:10:50.717 }, 00:10:50.717 "memory_domains": [ 00:10:50.717 { 00:10:50.717 "dma_device_id": "system", 00:10:50.717 "dma_device_type": 1 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.717 "dma_device_type": 2 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "system", 
00:10:50.717 "dma_device_type": 1 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.717 "dma_device_type": 2 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "system", 00:10:50.717 "dma_device_type": 1 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.717 "dma_device_type": 2 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "system", 00:10:50.717 "dma_device_type": 1 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.717 "dma_device_type": 2 00:10:50.717 } 00:10:50.717 ], 00:10:50.717 "driver_specific": { 00:10:50.717 "raid": { 00:10:50.717 "uuid": "8f666bfa-9b44-46a9-b1af-50c3e9b6cc74", 00:10:50.717 "strip_size_kb": 0, 00:10:50.717 "state": "online", 00:10:50.717 "raid_level": "raid1", 00:10:50.717 "superblock": false, 00:10:50.717 "num_base_bdevs": 4, 00:10:50.717 "num_base_bdevs_discovered": 4, 00:10:50.717 "num_base_bdevs_operational": 4, 00:10:50.717 "base_bdevs_list": [ 00:10:50.717 { 00:10:50.717 "name": "NewBaseBdev", 00:10:50.717 "uuid": "e81fb55d-69f5-4dc5-b585-e9ebc565b88d", 00:10:50.717 "is_configured": true, 00:10:50.717 "data_offset": 0, 00:10:50.717 "data_size": 65536 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "name": "BaseBdev2", 00:10:50.717 "uuid": "59d85b39-c20e-42c7-9bf7-c51d5e53031c", 00:10:50.717 "is_configured": true, 00:10:50.717 "data_offset": 0, 00:10:50.717 "data_size": 65536 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "name": "BaseBdev3", 00:10:50.717 "uuid": "ee133d9e-3250-4847-a294-18501a6c2569", 00:10:50.717 "is_configured": true, 00:10:50.717 "data_offset": 0, 00:10:50.717 "data_size": 65536 00:10:50.717 }, 00:10:50.717 { 00:10:50.717 "name": "BaseBdev4", 00:10:50.717 "uuid": "7db19d24-6378-41be-aeb6-18216273cfaf", 00:10:50.717 "is_configured": true, 00:10:50.717 "data_offset": 0, 00:10:50.717 "data_size": 65536 00:10:50.717 } 00:10:50.717 ] 00:10:50.717 } 00:10:50.717 } 
00:10:50.717 }' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:50.717 BaseBdev2 00:10:50.717 BaseBdev3 00:10:50.717 BaseBdev4' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.717 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.977 [2024-11-26 22:55:29.878215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.977 [2024-11-26 22:55:29.878264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.977 [2024-11-26 22:55:29.878356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.977 [2024-11-26 22:55:29.878661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.977 [2024-11-26 22:55:29.878673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85596 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 85596 ']' 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 85596 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:50.977 22:55:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85596 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85596' 00:10:50.977 killing process with pid 85596 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 85596 00:10:50.977 [2024-11-26 22:55:29.926790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.977 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 85596 00:10:50.977 [2024-11-26 22:55:30.002326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.237 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:51.237 00:10:51.237 real 0m9.670s 00:10:51.237 user 0m16.197s 00:10:51.237 sys 0m2.161s 00:10:51.237 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.237 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.237 ************************************ 00:10:51.237 END TEST raid_state_function_test 00:10:51.237 ************************************ 00:10:51.498 22:55:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:51.498 22:55:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.498 22:55:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.498 22:55:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
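The trace above repeatedly verifies raid state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq` and comparing fields (`state`, `raid_level`, `num_base_bdevs_discovered`). A minimal sketch of that `verify_raid_bdev_state` pattern follows; the RPC response is stubbed with a here-doc sample (field names taken from the log), since a live run would query the SPDK RPC socket instead:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state pattern seen in the trace:
# select the target raid bdev from RPC-style JSON and check its fields.
# The here-doc stands in for "rpc.py bdev_raid_get_bdevs all" output.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[ { "name": "Existed_Raid", "state": "online", "raid_level": "raid1",
    "num_base_bdevs": 4, "num_base_bdevs_discovered": 4 } ]
EOF
)

# Pull out the individual fields, as the test script does with jq -r.
state=$(jq -r '.state' <<<"$raid_bdev_info")
level=$(jq -r '.raid_level' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")

# Compare against the expected values passed to verify_raid_bdev_state.
if [[ $state == online && $level == raid1 ]] && (( discovered == 4 )); then
    echo "state verified"
fi
```

This mirrors the log's `jq -r '.[] | select(.name == "Existed_Raid")'` filter; the sample JSON and the `state verified` message are illustrative, not part of the SPDK scripts.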
00:10:51.498 ************************************ 00:10:51.498 START TEST raid_state_function_test_sb 00:10:51.498 ************************************ 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev4 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:51.498 Process raid pid: 86245 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86245 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86245' 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86245 00:10:51.498 22:55:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86245 ']' 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.498 22:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.498 [2024-11-26 22:55:30.506520] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:51.498 [2024-11-26 22:55:30.506718] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.758 [2024-11-26 22:55:30.644757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:51.758 [2024-11-26 22:55:30.683980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.758 [2024-11-26 22:55:30.724829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.758 [2024-11-26 22:55:30.801313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.758 [2024-11-26 22:55:30.801481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.327 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.327 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:52.327 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.327 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.327 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.327 [2024-11-26 22:55:31.344689] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.327 [2024-11-26 22:55:31.344755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.327 [2024-11-26 22:55:31.344772] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.327 [2024-11-26 22:55:31.344782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.327 [2024-11-26 22:55:31.344797] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.328 [2024-11-26 22:55:31.344805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.328 [2024-11-26 22:55:31.344818] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.328 
[2024-11-26 22:55:31.344827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.328 "name": "Existed_Raid", 00:10:52.328 "uuid": "d2f69fda-4be1-457f-984c-d3192a51c728", 00:10:52.328 "strip_size_kb": 0, 00:10:52.328 "state": "configuring", 00:10:52.328 "raid_level": "raid1", 00:10:52.328 "superblock": true, 00:10:52.328 "num_base_bdevs": 4, 00:10:52.328 "num_base_bdevs_discovered": 0, 00:10:52.328 "num_base_bdevs_operational": 4, 00:10:52.328 "base_bdevs_list": [ 00:10:52.328 { 00:10:52.328 "name": "BaseBdev1", 00:10:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.328 "is_configured": false, 00:10:52.328 "data_offset": 0, 00:10:52.328 "data_size": 0 00:10:52.328 }, 00:10:52.328 { 00:10:52.328 "name": "BaseBdev2", 00:10:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.328 "is_configured": false, 00:10:52.328 "data_offset": 0, 00:10:52.328 "data_size": 0 00:10:52.328 }, 00:10:52.328 { 00:10:52.328 "name": "BaseBdev3", 00:10:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.328 "is_configured": false, 00:10:52.328 "data_offset": 0, 00:10:52.328 "data_size": 0 00:10:52.328 }, 00:10:52.328 { 00:10:52.328 "name": "BaseBdev4", 00:10:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.328 "is_configured": false, 00:10:52.328 "data_offset": 0, 00:10:52.328 "data_size": 0 00:10:52.328 } 00:10:52.328 ] 00:10:52.328 }' 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.328 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 
[2024-11-26 22:55:31.836709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.898 [2024-11-26 22:55:31.836807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 [2024-11-26 22:55:31.848739] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.898 [2024-11-26 22:55:31.848785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.898 [2024-11-26 22:55:31.848799] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.898 [2024-11-26 22:55:31.848825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.898 [2024-11-26 22:55:31.848835] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.898 [2024-11-26 22:55:31.848844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.898 [2024-11-26 22:55:31.848855] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.898 [2024-11-26 22:55:31.848863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 [2024-11-26 22:55:31.875787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.898 BaseBdev1 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 [ 00:10:52.898 { 00:10:52.898 "name": "BaseBdev1", 00:10:52.898 "aliases": [ 00:10:52.898 "22e844e6-8555-468a-83ca-cf94227d92ef" 00:10:52.898 ], 00:10:52.898 "product_name": "Malloc disk", 00:10:52.898 "block_size": 512, 00:10:52.898 "num_blocks": 65536, 00:10:52.898 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:52.898 "assigned_rate_limits": { 00:10:52.898 "rw_ios_per_sec": 0, 00:10:52.898 "rw_mbytes_per_sec": 0, 00:10:52.898 "r_mbytes_per_sec": 0, 00:10:52.898 "w_mbytes_per_sec": 0 00:10:52.898 }, 00:10:52.898 "claimed": true, 00:10:52.898 "claim_type": "exclusive_write", 00:10:52.898 "zoned": false, 00:10:52.898 "supported_io_types": { 00:10:52.898 "read": true, 00:10:52.898 "write": true, 00:10:52.898 "unmap": true, 00:10:52.898 "flush": true, 00:10:52.898 "reset": true, 00:10:52.898 "nvme_admin": false, 00:10:52.898 "nvme_io": false, 00:10:52.898 "nvme_io_md": false, 00:10:52.898 "write_zeroes": true, 00:10:52.898 "zcopy": true, 00:10:52.898 "get_zone_info": false, 00:10:52.898 "zone_management": false, 00:10:52.898 "zone_append": false, 00:10:52.898 "compare": false, 00:10:52.898 "compare_and_write": false, 00:10:52.898 "abort": true, 00:10:52.898 "seek_hole": false, 00:10:52.898 "seek_data": false, 00:10:52.898 "copy": true, 00:10:52.898 "nvme_iov_md": false 00:10:52.898 }, 00:10:52.898 "memory_domains": [ 00:10:52.898 { 00:10:52.898 "dma_device_id": "system", 00:10:52.898 "dma_device_type": 1 00:10:52.898 }, 00:10:52.898 { 00:10:52.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.898 "dma_device_type": 2 00:10:52.898 } 00:10:52.898 ], 00:10:52.898 "driver_specific": {} 00:10:52.898 } 00:10:52.898 ] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.898 
22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.898 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.898 "name": "Existed_Raid", 00:10:52.898 "uuid": "4ae2c716-556b-4b48-b097-11ddcf5d7b35", 00:10:52.898 "strip_size_kb": 0, 
00:10:52.898 "state": "configuring", 00:10:52.898 "raid_level": "raid1", 00:10:52.898 "superblock": true, 00:10:52.898 "num_base_bdevs": 4, 00:10:52.898 "num_base_bdevs_discovered": 1, 00:10:52.898 "num_base_bdevs_operational": 4, 00:10:52.898 "base_bdevs_list": [ 00:10:52.898 { 00:10:52.898 "name": "BaseBdev1", 00:10:52.898 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:52.898 "is_configured": true, 00:10:52.898 "data_offset": 2048, 00:10:52.898 "data_size": 63488 00:10:52.898 }, 00:10:52.898 { 00:10:52.898 "name": "BaseBdev2", 00:10:52.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.898 "is_configured": false, 00:10:52.898 "data_offset": 0, 00:10:52.898 "data_size": 0 00:10:52.898 }, 00:10:52.898 { 00:10:52.898 "name": "BaseBdev3", 00:10:52.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.898 "is_configured": false, 00:10:52.899 "data_offset": 0, 00:10:52.899 "data_size": 0 00:10:52.899 }, 00:10:52.899 { 00:10:52.899 "name": "BaseBdev4", 00:10:52.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.899 "is_configured": false, 00:10:52.899 "data_offset": 0, 00:10:52.899 "data_size": 0 00:10:52.899 } 00:10:52.899 ] 00:10:52.899 }' 00:10:52.899 22:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.899 22:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 [2024-11-26 22:55:32.383956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.465 [2024-11-26 22:55:32.384018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.465 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.466 [2024-11-26 22:55:32.395995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.466 [2024-11-26 22:55:32.398142] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.466 [2024-11-26 22:55:32.398226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.466 [2024-11-26 22:55:32.398299] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.466 [2024-11-26 22:55:32.398332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.466 [2024-11-26 22:55:32.398370] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.466 [2024-11-26 22:55:32.398414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.466 "name": "Existed_Raid", 00:10:53.466 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:53.466 "strip_size_kb": 0, 00:10:53.466 "state": "configuring", 00:10:53.466 "raid_level": "raid1", 00:10:53.466 "superblock": true, 00:10:53.466 "num_base_bdevs": 4, 00:10:53.466 "num_base_bdevs_discovered": 1, 00:10:53.466 
"num_base_bdevs_operational": 4, 00:10:53.466 "base_bdevs_list": [ 00:10:53.466 { 00:10:53.466 "name": "BaseBdev1", 00:10:53.466 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:53.466 "is_configured": true, 00:10:53.466 "data_offset": 2048, 00:10:53.466 "data_size": 63488 00:10:53.466 }, 00:10:53.466 { 00:10:53.466 "name": "BaseBdev2", 00:10:53.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.466 "is_configured": false, 00:10:53.466 "data_offset": 0, 00:10:53.466 "data_size": 0 00:10:53.466 }, 00:10:53.466 { 00:10:53.466 "name": "BaseBdev3", 00:10:53.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.466 "is_configured": false, 00:10:53.466 "data_offset": 0, 00:10:53.466 "data_size": 0 00:10:53.466 }, 00:10:53.466 { 00:10:53.466 "name": "BaseBdev4", 00:10:53.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.466 "is_configured": false, 00:10:53.466 "data_offset": 0, 00:10:53.466 "data_size": 0 00:10:53.466 } 00:10:53.466 ] 00:10:53.466 }' 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.466 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.033 [2024-11-26 22:55:32.897020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.033 BaseBdev2 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.033 [ 00:10:54.033 { 00:10:54.033 "name": "BaseBdev2", 00:10:54.033 "aliases": [ 00:10:54.033 "705f640f-1e3d-48c7-a19d-f3759d1bd1e7" 00:10:54.033 ], 00:10:54.033 "product_name": "Malloc disk", 00:10:54.033 "block_size": 512, 00:10:54.033 "num_blocks": 65536, 00:10:54.033 "uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:54.033 "assigned_rate_limits": { 00:10:54.033 "rw_ios_per_sec": 0, 00:10:54.033 "rw_mbytes_per_sec": 0, 00:10:54.033 "r_mbytes_per_sec": 0, 00:10:54.033 "w_mbytes_per_sec": 0 00:10:54.033 }, 00:10:54.033 "claimed": true, 00:10:54.033 "claim_type": "exclusive_write", 00:10:54.033 "zoned": false, 00:10:54.033 "supported_io_types": { 
00:10:54.033 "read": true, 00:10:54.033 "write": true, 00:10:54.033 "unmap": true, 00:10:54.033 "flush": true, 00:10:54.033 "reset": true, 00:10:54.033 "nvme_admin": false, 00:10:54.033 "nvme_io": false, 00:10:54.033 "nvme_io_md": false, 00:10:54.033 "write_zeroes": true, 00:10:54.033 "zcopy": true, 00:10:54.033 "get_zone_info": false, 00:10:54.033 "zone_management": false, 00:10:54.033 "zone_append": false, 00:10:54.033 "compare": false, 00:10:54.033 "compare_and_write": false, 00:10:54.033 "abort": true, 00:10:54.033 "seek_hole": false, 00:10:54.033 "seek_data": false, 00:10:54.033 "copy": true, 00:10:54.033 "nvme_iov_md": false 00:10:54.033 }, 00:10:54.033 "memory_domains": [ 00:10:54.033 { 00:10:54.033 "dma_device_id": "system", 00:10:54.033 "dma_device_type": 1 00:10:54.033 }, 00:10:54.033 { 00:10:54.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.033 "dma_device_type": 2 00:10:54.033 } 00:10:54.033 ], 00:10:54.033 "driver_specific": {} 00:10:54.033 } 00:10:54.033 ] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.033 "name": "Existed_Raid", 00:10:54.033 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:54.033 "strip_size_kb": 0, 00:10:54.033 "state": "configuring", 00:10:54.033 "raid_level": "raid1", 00:10:54.033 "superblock": true, 00:10:54.033 "num_base_bdevs": 4, 00:10:54.033 "num_base_bdevs_discovered": 2, 00:10:54.033 "num_base_bdevs_operational": 4, 00:10:54.033 "base_bdevs_list": [ 00:10:54.033 { 00:10:54.033 "name": "BaseBdev1", 00:10:54.033 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:54.033 "is_configured": true, 00:10:54.033 "data_offset": 2048, 00:10:54.033 "data_size": 63488 00:10:54.033 }, 00:10:54.033 { 00:10:54.033 "name": "BaseBdev2", 00:10:54.033 
"uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:54.033 "is_configured": true, 00:10:54.033 "data_offset": 2048, 00:10:54.033 "data_size": 63488 00:10:54.033 }, 00:10:54.033 { 00:10:54.033 "name": "BaseBdev3", 00:10:54.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.033 "is_configured": false, 00:10:54.033 "data_offset": 0, 00:10:54.033 "data_size": 0 00:10:54.033 }, 00:10:54.033 { 00:10:54.033 "name": "BaseBdev4", 00:10:54.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.033 "is_configured": false, 00:10:54.033 "data_offset": 0, 00:10:54.033 "data_size": 0 00:10:54.033 } 00:10:54.033 ] 00:10:54.033 }' 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.033 22:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.292 [2024-11-26 22:55:33.412731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.292 BaseBdev3 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.292 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.551 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.551 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.551 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.551 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.551 [ 00:10:54.551 { 00:10:54.551 "name": "BaseBdev3", 00:10:54.551 "aliases": [ 00:10:54.551 "920b9e5f-3b90-43d1-88ee-712a8b652bfc" 00:10:54.551 ], 00:10:54.551 "product_name": "Malloc disk", 00:10:54.551 "block_size": 512, 00:10:54.551 "num_blocks": 65536, 00:10:54.551 "uuid": "920b9e5f-3b90-43d1-88ee-712a8b652bfc", 00:10:54.551 "assigned_rate_limits": { 00:10:54.551 "rw_ios_per_sec": 0, 00:10:54.551 "rw_mbytes_per_sec": 0, 00:10:54.551 "r_mbytes_per_sec": 0, 00:10:54.551 "w_mbytes_per_sec": 0 00:10:54.551 }, 00:10:54.551 "claimed": true, 00:10:54.551 "claim_type": "exclusive_write", 00:10:54.551 "zoned": false, 00:10:54.551 "supported_io_types": { 00:10:54.551 "read": true, 00:10:54.551 "write": true, 00:10:54.551 "unmap": true, 00:10:54.551 "flush": true, 00:10:54.551 "reset": true, 00:10:54.551 "nvme_admin": false, 00:10:54.551 "nvme_io": false, 00:10:54.551 "nvme_io_md": false, 00:10:54.552 "write_zeroes": true, 00:10:54.552 "zcopy": true, 00:10:54.552 "get_zone_info": false, 00:10:54.552 
"zone_management": false, 00:10:54.552 "zone_append": false, 00:10:54.552 "compare": false, 00:10:54.552 "compare_and_write": false, 00:10:54.552 "abort": true, 00:10:54.552 "seek_hole": false, 00:10:54.552 "seek_data": false, 00:10:54.552 "copy": true, 00:10:54.552 "nvme_iov_md": false 00:10:54.552 }, 00:10:54.552 "memory_domains": [ 00:10:54.552 { 00:10:54.552 "dma_device_id": "system", 00:10:54.552 "dma_device_type": 1 00:10:54.552 }, 00:10:54.552 { 00:10:54.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.552 "dma_device_type": 2 00:10:54.552 } 00:10:54.552 ], 00:10:54.552 "driver_specific": {} 00:10:54.552 } 00:10:54.552 ] 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.552 "name": "Existed_Raid", 00:10:54.552 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:54.552 "strip_size_kb": 0, 00:10:54.552 "state": "configuring", 00:10:54.552 "raid_level": "raid1", 00:10:54.552 "superblock": true, 00:10:54.552 "num_base_bdevs": 4, 00:10:54.552 "num_base_bdevs_discovered": 3, 00:10:54.552 "num_base_bdevs_operational": 4, 00:10:54.552 "base_bdevs_list": [ 00:10:54.552 { 00:10:54.552 "name": "BaseBdev1", 00:10:54.552 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:54.552 "is_configured": true, 00:10:54.552 "data_offset": 2048, 00:10:54.552 "data_size": 63488 00:10:54.552 }, 00:10:54.552 { 00:10:54.552 "name": "BaseBdev2", 00:10:54.552 "uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:54.552 "is_configured": true, 00:10:54.552 "data_offset": 2048, 00:10:54.552 "data_size": 63488 00:10:54.552 }, 00:10:54.552 { 00:10:54.552 "name": "BaseBdev3", 00:10:54.552 "uuid": "920b9e5f-3b90-43d1-88ee-712a8b652bfc", 00:10:54.552 "is_configured": true, 00:10:54.552 "data_offset": 2048, 
00:10:54.552 "data_size": 63488 00:10:54.552 }, 00:10:54.552 { 00:10:54.552 "name": "BaseBdev4", 00:10:54.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.552 "is_configured": false, 00:10:54.552 "data_offset": 0, 00:10:54.552 "data_size": 0 00:10:54.552 } 00:10:54.552 ] 00:10:54.552 }' 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.552 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.812 [2024-11-26 22:55:33.933730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.812 BaseBdev4 00:10:54.812 [2024-11-26 22:55:33.934083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:54.812 [2024-11-26 22:55:33.934116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.812 [2024-11-26 22:55:33.934464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:54.812 [2024-11-26 22:55:33.934657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:54.812 [2024-11-26 22:55:33.934675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:54.812 [2024-11-26 22:55:33.934845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.812 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.073 [ 00:10:55.073 { 00:10:55.073 "name": "BaseBdev4", 00:10:55.073 "aliases": [ 00:10:55.073 "e4ff09d5-f56d-44c2-be31-e904c3f15462" 00:10:55.073 ], 00:10:55.073 "product_name": "Malloc disk", 00:10:55.073 "block_size": 512, 00:10:55.073 "num_blocks": 65536, 00:10:55.073 "uuid": "e4ff09d5-f56d-44c2-be31-e904c3f15462", 00:10:55.073 "assigned_rate_limits": { 00:10:55.073 "rw_ios_per_sec": 0, 00:10:55.073 "rw_mbytes_per_sec": 0, 00:10:55.073 "r_mbytes_per_sec": 0, 00:10:55.073 "w_mbytes_per_sec": 0 00:10:55.073 }, 00:10:55.073 "claimed": true, 00:10:55.073 "claim_type": 
"exclusive_write", 00:10:55.073 "zoned": false, 00:10:55.073 "supported_io_types": { 00:10:55.073 "read": true, 00:10:55.073 "write": true, 00:10:55.073 "unmap": true, 00:10:55.073 "flush": true, 00:10:55.073 "reset": true, 00:10:55.073 "nvme_admin": false, 00:10:55.073 "nvme_io": false, 00:10:55.073 "nvme_io_md": false, 00:10:55.073 "write_zeroes": true, 00:10:55.073 "zcopy": true, 00:10:55.073 "get_zone_info": false, 00:10:55.073 "zone_management": false, 00:10:55.073 "zone_append": false, 00:10:55.073 "compare": false, 00:10:55.073 "compare_and_write": false, 00:10:55.073 "abort": true, 00:10:55.073 "seek_hole": false, 00:10:55.073 "seek_data": false, 00:10:55.073 "copy": true, 00:10:55.073 "nvme_iov_md": false 00:10:55.073 }, 00:10:55.073 "memory_domains": [ 00:10:55.073 { 00:10:55.073 "dma_device_id": "system", 00:10:55.073 "dma_device_type": 1 00:10:55.073 }, 00:10:55.073 { 00:10:55.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.073 "dma_device_type": 2 00:10:55.073 } 00:10:55.073 ], 00:10:55.073 "driver_specific": {} 00:10:55.073 } 00:10:55.073 ] 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.073 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.074 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.074 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.074 "name": "Existed_Raid", 00:10:55.074 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:55.074 "strip_size_kb": 0, 00:10:55.074 "state": "online", 00:10:55.074 "raid_level": "raid1", 00:10:55.074 "superblock": true, 00:10:55.074 "num_base_bdevs": 4, 00:10:55.074 "num_base_bdevs_discovered": 4, 00:10:55.074 "num_base_bdevs_operational": 4, 00:10:55.074 "base_bdevs_list": [ 00:10:55.074 { 00:10:55.074 "name": "BaseBdev1", 00:10:55.074 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 
00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev2", 00:10:55.074 "uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev3", 00:10:55.074 "uuid": "920b9e5f-3b90-43d1-88ee-712a8b652bfc", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev4", 00:10:55.074 "uuid": "e4ff09d5-f56d-44c2-be31-e904c3f15462", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 } 00:10:55.074 ] 00:10:55.074 }' 00:10:55.074 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.074 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.342 
22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.342 [2024-11-26 22:55:34.414179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.342 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.342 "name": "Existed_Raid", 00:10:55.342 "aliases": [ 00:10:55.342 "fae217ef-ee5e-4100-a692-2632eadf58a8" 00:10:55.342 ], 00:10:55.342 "product_name": "Raid Volume", 00:10:55.342 "block_size": 512, 00:10:55.342 "num_blocks": 63488, 00:10:55.342 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:55.342 "assigned_rate_limits": { 00:10:55.342 "rw_ios_per_sec": 0, 00:10:55.342 "rw_mbytes_per_sec": 0, 00:10:55.342 "r_mbytes_per_sec": 0, 00:10:55.342 "w_mbytes_per_sec": 0 00:10:55.342 }, 00:10:55.342 "claimed": false, 00:10:55.342 "zoned": false, 00:10:55.342 "supported_io_types": { 00:10:55.342 "read": true, 00:10:55.342 "write": true, 00:10:55.342 "unmap": false, 00:10:55.342 "flush": false, 00:10:55.342 "reset": true, 00:10:55.342 "nvme_admin": false, 00:10:55.342 "nvme_io": false, 00:10:55.342 "nvme_io_md": false, 00:10:55.342 "write_zeroes": true, 00:10:55.342 "zcopy": false, 00:10:55.342 "get_zone_info": false, 00:10:55.342 "zone_management": false, 00:10:55.342 "zone_append": false, 00:10:55.342 "compare": false, 00:10:55.342 "compare_and_write": false, 00:10:55.342 "abort": false, 00:10:55.342 "seek_hole": false, 00:10:55.342 "seek_data": false, 00:10:55.342 "copy": false, 00:10:55.342 "nvme_iov_md": false 00:10:55.342 }, 00:10:55.342 "memory_domains": [ 00:10:55.342 { 00:10:55.342 "dma_device_id": "system", 00:10:55.342 "dma_device_type": 1 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.342 "dma_device_type": 2 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "dma_device_id": "system", 
00:10:55.342 "dma_device_type": 1 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.342 "dma_device_type": 2 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "dma_device_id": "system", 00:10:55.342 "dma_device_type": 1 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.342 "dma_device_type": 2 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "dma_device_id": "system", 00:10:55.343 "dma_device_type": 1 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.343 "dma_device_type": 2 00:10:55.343 } 00:10:55.343 ], 00:10:55.343 "driver_specific": { 00:10:55.343 "raid": { 00:10:55.343 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:55.343 "strip_size_kb": 0, 00:10:55.343 "state": "online", 00:10:55.343 "raid_level": "raid1", 00:10:55.343 "superblock": true, 00:10:55.343 "num_base_bdevs": 4, 00:10:55.343 "num_base_bdevs_discovered": 4, 00:10:55.343 "num_base_bdevs_operational": 4, 00:10:55.343 "base_bdevs_list": [ 00:10:55.343 { 00:10:55.343 "name": "BaseBdev1", 00:10:55.343 "uuid": "22e844e6-8555-468a-83ca-cf94227d92ef", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "name": "BaseBdev2", 00:10:55.343 "uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "name": "BaseBdev3", 00:10:55.343 "uuid": "920b9e5f-3b90-43d1-88ee-712a8b652bfc", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "name": "BaseBdev4", 00:10:55.343 "uuid": "e4ff09d5-f56d-44c2-be31-e904c3f15462", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 } 00:10:55.343 ] 00:10:55.343 } 00:10:55.343 
} 00:10:55.343 }' 00:10:55.343 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:55.633 BaseBdev2 00:10:55.633 BaseBdev3 00:10:55.633 BaseBdev4' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.633 22:55:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.633 22:55:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.633 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.633 [2024-11-26 22:55:34.754027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.893 "name": "Existed_Raid", 00:10:55.893 "uuid": "fae217ef-ee5e-4100-a692-2632eadf58a8", 00:10:55.893 "strip_size_kb": 0, 00:10:55.893 "state": "online", 00:10:55.893 "raid_level": "raid1", 00:10:55.893 "superblock": true, 00:10:55.893 "num_base_bdevs": 4, 00:10:55.893 "num_base_bdevs_discovered": 3, 00:10:55.893 "num_base_bdevs_operational": 3, 00:10:55.893 "base_bdevs_list": [ 00:10:55.893 { 00:10:55.893 "name": null, 00:10:55.893 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:55.893 "is_configured": false, 00:10:55.893 "data_offset": 0, 00:10:55.893 "data_size": 63488 00:10:55.893 }, 00:10:55.893 { 00:10:55.893 "name": "BaseBdev2", 00:10:55.893 "uuid": "705f640f-1e3d-48c7-a19d-f3759d1bd1e7", 00:10:55.893 "is_configured": true, 00:10:55.893 "data_offset": 2048, 00:10:55.893 "data_size": 63488 00:10:55.893 }, 00:10:55.893 { 00:10:55.893 "name": "BaseBdev3", 00:10:55.893 "uuid": "920b9e5f-3b90-43d1-88ee-712a8b652bfc", 00:10:55.893 "is_configured": true, 00:10:55.893 "data_offset": 2048, 00:10:55.893 "data_size": 63488 00:10:55.893 }, 00:10:55.893 { 00:10:55.893 "name": "BaseBdev4", 00:10:55.893 "uuid": "e4ff09d5-f56d-44c2-be31-e904c3f15462", 00:10:55.893 "is_configured": true, 00:10:55.893 "data_offset": 2048, 00:10:55.893 "data_size": 63488 00:10:55.893 } 00:10:55.893 ] 00:10:55.893 }' 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.893 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.152 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.413 22:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.413 [2024-11-26 22:55:35.291167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.413 [2024-11-26 22:55:35.372042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.413 [2024-11-26 22:55:35.452684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:56.413 [2024-11-26 22:55:35.452814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.413 [2024-11-26 22:55:35.473627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.413 [2024-11-26 
22:55:35.473760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.413 [2024-11-26 22:55:35.473811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.413 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.413 22:55:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.672 BaseBdev2 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.672 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.672 [ 00:10:56.672 { 00:10:56.672 "name": "BaseBdev2", 00:10:56.672 "aliases": [ 00:10:56.672 "8fd024be-529e-4e48-86b6-8a871d140644" 00:10:56.672 ], 00:10:56.673 "product_name": "Malloc disk", 00:10:56.673 "block_size": 512, 00:10:56.673 "num_blocks": 65536, 00:10:56.673 
"uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:56.673 "assigned_rate_limits": { 00:10:56.673 "rw_ios_per_sec": 0, 00:10:56.673 "rw_mbytes_per_sec": 0, 00:10:56.673 "r_mbytes_per_sec": 0, 00:10:56.673 "w_mbytes_per_sec": 0 00:10:56.673 }, 00:10:56.673 "claimed": false, 00:10:56.673 "zoned": false, 00:10:56.673 "supported_io_types": { 00:10:56.673 "read": true, 00:10:56.673 "write": true, 00:10:56.673 "unmap": true, 00:10:56.673 "flush": true, 00:10:56.673 "reset": true, 00:10:56.673 "nvme_admin": false, 00:10:56.673 "nvme_io": false, 00:10:56.673 "nvme_io_md": false, 00:10:56.673 "write_zeroes": true, 00:10:56.673 "zcopy": true, 00:10:56.673 "get_zone_info": false, 00:10:56.673 "zone_management": false, 00:10:56.673 "zone_append": false, 00:10:56.673 "compare": false, 00:10:56.673 "compare_and_write": false, 00:10:56.673 "abort": true, 00:10:56.673 "seek_hole": false, 00:10:56.673 "seek_data": false, 00:10:56.673 "copy": true, 00:10:56.673 "nvme_iov_md": false 00:10:56.673 }, 00:10:56.673 "memory_domains": [ 00:10:56.673 { 00:10:56.673 "dma_device_id": "system", 00:10:56.673 "dma_device_type": 1 00:10:56.673 }, 00:10:56.673 { 00:10:56.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.673 "dma_device_type": 2 00:10:56.673 } 00:10:56.673 ], 00:10:56.673 "driver_specific": {} 00:10:56.673 } 00:10:56.673 ] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 BaseBdev3 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 [ 00:10:56.673 { 00:10:56.673 "name": "BaseBdev3", 00:10:56.673 "aliases": [ 00:10:56.673 "232cf1b4-d182-4fb6-9fbb-4306bd7896c9" 00:10:56.673 ], 00:10:56.673 "product_name": "Malloc disk", 00:10:56.673 "block_size": 512, 
00:10:56.673 "num_blocks": 65536, 00:10:56.673 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:56.673 "assigned_rate_limits": { 00:10:56.673 "rw_ios_per_sec": 0, 00:10:56.673 "rw_mbytes_per_sec": 0, 00:10:56.673 "r_mbytes_per_sec": 0, 00:10:56.673 "w_mbytes_per_sec": 0 00:10:56.673 }, 00:10:56.673 "claimed": false, 00:10:56.673 "zoned": false, 00:10:56.673 "supported_io_types": { 00:10:56.673 "read": true, 00:10:56.673 "write": true, 00:10:56.673 "unmap": true, 00:10:56.673 "flush": true, 00:10:56.673 "reset": true, 00:10:56.673 "nvme_admin": false, 00:10:56.673 "nvme_io": false, 00:10:56.673 "nvme_io_md": false, 00:10:56.673 "write_zeroes": true, 00:10:56.673 "zcopy": true, 00:10:56.673 "get_zone_info": false, 00:10:56.673 "zone_management": false, 00:10:56.673 "zone_append": false, 00:10:56.673 "compare": false, 00:10:56.673 "compare_and_write": false, 00:10:56.673 "abort": true, 00:10:56.673 "seek_hole": false, 00:10:56.673 "seek_data": false, 00:10:56.673 "copy": true, 00:10:56.673 "nvme_iov_md": false 00:10:56.673 }, 00:10:56.673 "memory_domains": [ 00:10:56.673 { 00:10:56.673 "dma_device_id": "system", 00:10:56.673 "dma_device_type": 1 00:10:56.673 }, 00:10:56.673 { 00:10:56.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.673 "dma_device_type": 2 00:10:56.673 } 00:10:56.673 ], 00:10:56.673 "driver_specific": {} 00:10:56.673 } 00:10:56.673 ] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.673 22:55:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 BaseBdev4 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 [ 00:10:56.674 { 00:10:56.674 "name": "BaseBdev4", 00:10:56.674 "aliases": [ 00:10:56.674 "ff9dbb91-4995-43f0-95c5-3d65c04342be" 00:10:56.674 ], 
00:10:56.674 "product_name": "Malloc disk", 00:10:56.674 "block_size": 512, 00:10:56.674 "num_blocks": 65536, 00:10:56.674 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:56.674 "assigned_rate_limits": { 00:10:56.674 "rw_ios_per_sec": 0, 00:10:56.674 "rw_mbytes_per_sec": 0, 00:10:56.674 "r_mbytes_per_sec": 0, 00:10:56.674 "w_mbytes_per_sec": 0 00:10:56.674 }, 00:10:56.674 "claimed": false, 00:10:56.674 "zoned": false, 00:10:56.674 "supported_io_types": { 00:10:56.674 "read": true, 00:10:56.674 "write": true, 00:10:56.674 "unmap": true, 00:10:56.674 "flush": true, 00:10:56.674 "reset": true, 00:10:56.674 "nvme_admin": false, 00:10:56.674 "nvme_io": false, 00:10:56.674 "nvme_io_md": false, 00:10:56.674 "write_zeroes": true, 00:10:56.674 "zcopy": true, 00:10:56.674 "get_zone_info": false, 00:10:56.674 "zone_management": false, 00:10:56.674 "zone_append": false, 00:10:56.674 "compare": false, 00:10:56.674 "compare_and_write": false, 00:10:56.674 "abort": true, 00:10:56.674 "seek_hole": false, 00:10:56.674 "seek_data": false, 00:10:56.674 "copy": true, 00:10:56.674 "nvme_iov_md": false 00:10:56.674 }, 00:10:56.674 "memory_domains": [ 00:10:56.674 { 00:10:56.674 "dma_device_id": "system", 00:10:56.674 "dma_device_type": 1 00:10:56.674 }, 00:10:56.674 { 00:10:56.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.674 "dma_device_type": 2 00:10:56.674 } 00:10:56.674 ], 00:10:56.674 "driver_specific": {} 00:10:56.674 } 00:10:56.674 ] 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 [2024-11-26 22:55:35.705958] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.674 [2024-11-26 22:55:35.706055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.674 [2024-11-26 22:55:35.706111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.674 [2024-11-26 22:55:35.708306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.674 [2024-11-26 22:55:35.708423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.674 "name": "Existed_Raid", 00:10:56.674 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:56.674 "strip_size_kb": 0, 00:10:56.674 "state": "configuring", 00:10:56.674 "raid_level": "raid1", 00:10:56.674 "superblock": true, 00:10:56.674 "num_base_bdevs": 4, 00:10:56.674 "num_base_bdevs_discovered": 3, 00:10:56.674 "num_base_bdevs_operational": 4, 00:10:56.674 "base_bdevs_list": [ 00:10:56.674 { 00:10:56.674 "name": "BaseBdev1", 00:10:56.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.674 "is_configured": false, 00:10:56.674 "data_offset": 0, 00:10:56.674 "data_size": 0 00:10:56.674 }, 00:10:56.674 { 00:10:56.674 "name": "BaseBdev2", 00:10:56.674 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:56.674 "is_configured": true, 00:10:56.674 "data_offset": 2048, 00:10:56.674 "data_size": 63488 00:10:56.674 }, 00:10:56.674 { 00:10:56.674 "name": "BaseBdev3", 00:10:56.674 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:56.674 "is_configured": true, 00:10:56.674 "data_offset": 2048, 
00:10:56.674 "data_size": 63488 00:10:56.674 }, 00:10:56.674 { 00:10:56.674 "name": "BaseBdev4", 00:10:56.674 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:56.674 "is_configured": true, 00:10:56.674 "data_offset": 2048, 00:10:56.674 "data_size": 63488 00:10:56.674 } 00:10:56.674 ] 00:10:56.674 }' 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.674 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.241 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.241 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.241 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.242 [2024-11-26 22:55:36.190059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.242 "name": "Existed_Raid", 00:10:57.242 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:57.242 "strip_size_kb": 0, 00:10:57.242 "state": "configuring", 00:10:57.242 "raid_level": "raid1", 00:10:57.242 "superblock": true, 00:10:57.242 "num_base_bdevs": 4, 00:10:57.242 "num_base_bdevs_discovered": 2, 00:10:57.242 "num_base_bdevs_operational": 4, 00:10:57.242 "base_bdevs_list": [ 00:10:57.242 { 00:10:57.242 "name": "BaseBdev1", 00:10:57.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.242 "is_configured": false, 00:10:57.242 "data_offset": 0, 00:10:57.242 "data_size": 0 00:10:57.242 }, 00:10:57.242 { 00:10:57.242 "name": null, 00:10:57.242 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:57.242 "is_configured": false, 00:10:57.242 "data_offset": 0, 00:10:57.242 "data_size": 63488 00:10:57.242 }, 00:10:57.242 { 00:10:57.242 "name": "BaseBdev3", 00:10:57.242 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:57.242 "is_configured": true, 00:10:57.242 "data_offset": 2048, 00:10:57.242 
"data_size": 63488 00:10:57.242 }, 00:10:57.242 { 00:10:57.242 "name": "BaseBdev4", 00:10:57.242 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:57.242 "is_configured": true, 00:10:57.242 "data_offset": 2048, 00:10:57.242 "data_size": 63488 00:10:57.242 } 00:10:57.242 ] 00:10:57.242 }' 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.242 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.500 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.500 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.500 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.500 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 [2024-11-26 22:55:36.675103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.760 BaseBdev1 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 [ 00:10:57.760 { 00:10:57.760 "name": "BaseBdev1", 00:10:57.760 "aliases": [ 00:10:57.760 "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9" 00:10:57.760 ], 00:10:57.760 "product_name": "Malloc disk", 00:10:57.760 "block_size": 512, 00:10:57.760 "num_blocks": 65536, 00:10:57.760 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:57.760 "assigned_rate_limits": { 00:10:57.760 "rw_ios_per_sec": 0, 00:10:57.760 "rw_mbytes_per_sec": 0, 00:10:57.760 "r_mbytes_per_sec": 0, 00:10:57.760 "w_mbytes_per_sec": 0 00:10:57.760 }, 00:10:57.760 "claimed": true, 00:10:57.760 "claim_type": "exclusive_write", 00:10:57.760 "zoned": false, 00:10:57.760 "supported_io_types": { 
00:10:57.760 "read": true, 00:10:57.760 "write": true, 00:10:57.760 "unmap": true, 00:10:57.760 "flush": true, 00:10:57.760 "reset": true, 00:10:57.760 "nvme_admin": false, 00:10:57.760 "nvme_io": false, 00:10:57.760 "nvme_io_md": false, 00:10:57.760 "write_zeroes": true, 00:10:57.760 "zcopy": true, 00:10:57.760 "get_zone_info": false, 00:10:57.760 "zone_management": false, 00:10:57.760 "zone_append": false, 00:10:57.760 "compare": false, 00:10:57.760 "compare_and_write": false, 00:10:57.760 "abort": true, 00:10:57.760 "seek_hole": false, 00:10:57.760 "seek_data": false, 00:10:57.760 "copy": true, 00:10:57.760 "nvme_iov_md": false 00:10:57.760 }, 00:10:57.760 "memory_domains": [ 00:10:57.760 { 00:10:57.760 "dma_device_id": "system", 00:10:57.760 "dma_device_type": 1 00:10:57.760 }, 00:10:57.760 { 00:10:57.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.760 "dma_device_type": 2 00:10:57.760 } 00:10:57.760 ], 00:10:57.760 "driver_specific": {} 00:10:57.760 } 00:10:57.760 ] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.760 22:55:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.760 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.760 "name": "Existed_Raid", 00:10:57.760 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:57.760 "strip_size_kb": 0, 00:10:57.760 "state": "configuring", 00:10:57.760 "raid_level": "raid1", 00:10:57.760 "superblock": true, 00:10:57.760 "num_base_bdevs": 4, 00:10:57.760 "num_base_bdevs_discovered": 3, 00:10:57.760 "num_base_bdevs_operational": 4, 00:10:57.760 "base_bdevs_list": [ 00:10:57.760 { 00:10:57.760 "name": "BaseBdev1", 00:10:57.760 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:57.760 "is_configured": true, 00:10:57.760 "data_offset": 2048, 00:10:57.760 "data_size": 63488 00:10:57.760 }, 00:10:57.760 { 00:10:57.760 "name": null, 00:10:57.760 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:57.760 "is_configured": false, 00:10:57.760 "data_offset": 0, 00:10:57.760 "data_size": 63488 00:10:57.760 }, 00:10:57.760 { 00:10:57.760 "name": 
"BaseBdev3", 00:10:57.760 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:57.760 "is_configured": true, 00:10:57.761 "data_offset": 2048, 00:10:57.761 "data_size": 63488 00:10:57.761 }, 00:10:57.761 { 00:10:57.761 "name": "BaseBdev4", 00:10:57.761 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:57.761 "is_configured": true, 00:10:57.761 "data_offset": 2048, 00:10:57.761 "data_size": 63488 00:10:57.761 } 00:10:57.761 ] 00:10:57.761 }' 00:10:57.761 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.761 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.020 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.020 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.020 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.281 [2024-11-26 22:55:37.155246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.281 "name": "Existed_Raid", 00:10:58.281 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:58.281 "strip_size_kb": 0, 00:10:58.281 "state": "configuring", 00:10:58.281 
"raid_level": "raid1", 00:10:58.281 "superblock": true, 00:10:58.281 "num_base_bdevs": 4, 00:10:58.281 "num_base_bdevs_discovered": 2, 00:10:58.281 "num_base_bdevs_operational": 4, 00:10:58.281 "base_bdevs_list": [ 00:10:58.281 { 00:10:58.281 "name": "BaseBdev1", 00:10:58.281 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:58.281 "is_configured": true, 00:10:58.281 "data_offset": 2048, 00:10:58.281 "data_size": 63488 00:10:58.281 }, 00:10:58.281 { 00:10:58.281 "name": null, 00:10:58.281 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:58.281 "is_configured": false, 00:10:58.281 "data_offset": 0, 00:10:58.281 "data_size": 63488 00:10:58.281 }, 00:10:58.281 { 00:10:58.281 "name": null, 00:10:58.281 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:58.281 "is_configured": false, 00:10:58.281 "data_offset": 0, 00:10:58.281 "data_size": 63488 00:10:58.281 }, 00:10:58.281 { 00:10:58.281 "name": "BaseBdev4", 00:10:58.281 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:58.281 "is_configured": true, 00:10:58.281 "data_offset": 2048, 00:10:58.281 "data_size": 63488 00:10:58.281 } 00:10:58.281 ] 00:10:58.281 }' 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.281 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 [2024-11-26 22:55:37.615445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.542 22:55:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.803 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.803 "name": "Existed_Raid", 00:10:58.803 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:58.803 "strip_size_kb": 0, 00:10:58.803 "state": "configuring", 00:10:58.803 "raid_level": "raid1", 00:10:58.803 "superblock": true, 00:10:58.803 "num_base_bdevs": 4, 00:10:58.803 "num_base_bdevs_discovered": 3, 00:10:58.803 "num_base_bdevs_operational": 4, 00:10:58.803 "base_bdevs_list": [ 00:10:58.803 { 00:10:58.803 "name": "BaseBdev1", 00:10:58.803 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 2048, 00:10:58.803 "data_size": 63488 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": null, 00:10:58.803 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:58.803 "is_configured": false, 00:10:58.803 "data_offset": 0, 00:10:58.803 "data_size": 63488 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": "BaseBdev3", 00:10:58.803 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 2048, 00:10:58.803 "data_size": 63488 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": "BaseBdev4", 00:10:58.803 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 2048, 00:10:58.803 "data_size": 63488 00:10:58.803 } 00:10:58.803 ] 00:10:58.803 }' 00:10:58.803 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.803 
22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.063 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.063 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.063 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.064 [2024-11-26 22:55:38.083620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.064 22:55:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.064 "name": "Existed_Raid", 00:10:59.064 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:59.064 "strip_size_kb": 0, 00:10:59.064 "state": "configuring", 00:10:59.064 "raid_level": "raid1", 00:10:59.064 "superblock": true, 00:10:59.064 "num_base_bdevs": 4, 00:10:59.064 "num_base_bdevs_discovered": 2, 00:10:59.064 "num_base_bdevs_operational": 4, 00:10:59.064 "base_bdevs_list": [ 00:10:59.064 { 00:10:59.064 "name": null, 00:10:59.064 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:59.064 "is_configured": false, 00:10:59.064 "data_offset": 0, 00:10:59.064 "data_size": 63488 00:10:59.064 }, 00:10:59.064 { 00:10:59.064 "name": null, 00:10:59.064 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:59.064 "is_configured": false, 
00:10:59.064 "data_offset": 0, 00:10:59.064 "data_size": 63488 00:10:59.064 }, 00:10:59.064 { 00:10:59.064 "name": "BaseBdev3", 00:10:59.064 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:59.064 "is_configured": true, 00:10:59.064 "data_offset": 2048, 00:10:59.064 "data_size": 63488 00:10:59.064 }, 00:10:59.064 { 00:10:59.064 "name": "BaseBdev4", 00:10:59.064 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:59.064 "is_configured": true, 00:10:59.064 "data_offset": 2048, 00:10:59.064 "data_size": 63488 00:10:59.064 } 00:10:59.064 ] 00:10:59.064 }' 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.064 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.635 [2024-11-26 22:55:38.600083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.635 22:55:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.635 "name": 
"Existed_Raid", 00:10:59.635 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:10:59.635 "strip_size_kb": 0, 00:10:59.635 "state": "configuring", 00:10:59.635 "raid_level": "raid1", 00:10:59.635 "superblock": true, 00:10:59.635 "num_base_bdevs": 4, 00:10:59.635 "num_base_bdevs_discovered": 3, 00:10:59.635 "num_base_bdevs_operational": 4, 00:10:59.635 "base_bdevs_list": [ 00:10:59.635 { 00:10:59.635 "name": null, 00:10:59.635 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:10:59.635 "is_configured": false, 00:10:59.635 "data_offset": 0, 00:10:59.635 "data_size": 63488 00:10:59.635 }, 00:10:59.635 { 00:10:59.635 "name": "BaseBdev2", 00:10:59.635 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:10:59.635 "is_configured": true, 00:10:59.635 "data_offset": 2048, 00:10:59.635 "data_size": 63488 00:10:59.635 }, 00:10:59.635 { 00:10:59.635 "name": "BaseBdev3", 00:10:59.635 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:10:59.635 "is_configured": true, 00:10:59.635 "data_offset": 2048, 00:10:59.635 "data_size": 63488 00:10:59.635 }, 00:10:59.635 { 00:10:59.635 "name": "BaseBdev4", 00:10:59.635 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:10:59.635 "is_configured": true, 00:10:59.635 "data_offset": 2048, 00:10:59.635 "data_size": 63488 00:10:59.635 } 00:10:59.635 ] 00:10:59.635 }' 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.635 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 [2024-11-26 22:55:39.157043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.205 [2024-11-26 22:55:39.157296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.205 [2024-11-26 22:55:39.157314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.205 [2024-11-26 22:55:39.157583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:00.205 NewBaseBdev 00:11:00.205 [2024-11-26 22:55:39.157745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.205 [2024-11-26 22:55:39.157768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:00.205 [2024-11-26 22:55:39.157891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.205 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 [ 00:11:00.205 { 00:11:00.205 "name": "NewBaseBdev", 00:11:00.205 "aliases": [ 00:11:00.205 "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9" 00:11:00.205 ], 00:11:00.205 "product_name": "Malloc disk", 00:11:00.205 "block_size": 512, 
00:11:00.205 "num_blocks": 65536, 00:11:00.205 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:11:00.205 "assigned_rate_limits": { 00:11:00.205 "rw_ios_per_sec": 0, 00:11:00.205 "rw_mbytes_per_sec": 0, 00:11:00.205 "r_mbytes_per_sec": 0, 00:11:00.205 "w_mbytes_per_sec": 0 00:11:00.205 }, 00:11:00.205 "claimed": true, 00:11:00.205 "claim_type": "exclusive_write", 00:11:00.205 "zoned": false, 00:11:00.205 "supported_io_types": { 00:11:00.205 "read": true, 00:11:00.205 "write": true, 00:11:00.205 "unmap": true, 00:11:00.205 "flush": true, 00:11:00.205 "reset": true, 00:11:00.205 "nvme_admin": false, 00:11:00.205 "nvme_io": false, 00:11:00.205 "nvme_io_md": false, 00:11:00.205 "write_zeroes": true, 00:11:00.205 "zcopy": true, 00:11:00.205 "get_zone_info": false, 00:11:00.205 "zone_management": false, 00:11:00.205 "zone_append": false, 00:11:00.205 "compare": false, 00:11:00.205 "compare_and_write": false, 00:11:00.205 "abort": true, 00:11:00.205 "seek_hole": false, 00:11:00.205 "seek_data": false, 00:11:00.205 "copy": true, 00:11:00.205 "nvme_iov_md": false 00:11:00.205 }, 00:11:00.205 "memory_domains": [ 00:11:00.205 { 00:11:00.205 "dma_device_id": "system", 00:11:00.205 "dma_device_type": 1 00:11:00.205 }, 00:11:00.205 { 00:11:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.205 "dma_device_type": 2 00:11:00.205 } 00:11:00.205 ], 00:11:00.205 "driver_specific": {} 00:11:00.205 } 00:11:00.206 ] 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.206 "name": "Existed_Raid", 00:11:00.206 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:11:00.206 "strip_size_kb": 0, 00:11:00.206 "state": "online", 00:11:00.206 "raid_level": "raid1", 00:11:00.206 "superblock": true, 00:11:00.206 "num_base_bdevs": 4, 00:11:00.206 "num_base_bdevs_discovered": 4, 00:11:00.206 "num_base_bdevs_operational": 4, 00:11:00.206 "base_bdevs_list": [ 00:11:00.206 { 00:11:00.206 "name": "NewBaseBdev", 00:11:00.206 "uuid": 
"20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:11:00.206 "is_configured": true, 00:11:00.206 "data_offset": 2048, 00:11:00.206 "data_size": 63488 00:11:00.206 }, 00:11:00.206 { 00:11:00.206 "name": "BaseBdev2", 00:11:00.206 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:11:00.206 "is_configured": true, 00:11:00.206 "data_offset": 2048, 00:11:00.206 "data_size": 63488 00:11:00.206 }, 00:11:00.206 { 00:11:00.206 "name": "BaseBdev3", 00:11:00.206 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:11:00.206 "is_configured": true, 00:11:00.206 "data_offset": 2048, 00:11:00.206 "data_size": 63488 00:11:00.206 }, 00:11:00.206 { 00:11:00.206 "name": "BaseBdev4", 00:11:00.206 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 00:11:00.206 "is_configured": true, 00:11:00.206 "data_offset": 2048, 00:11:00.206 "data_size": 63488 00:11:00.206 } 00:11:00.206 ] 00:11:00.206 }' 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.206 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.776 [2024-11-26 22:55:39.685496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.776 "name": "Existed_Raid", 00:11:00.776 "aliases": [ 00:11:00.776 "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a" 00:11:00.776 ], 00:11:00.776 "product_name": "Raid Volume", 00:11:00.776 "block_size": 512, 00:11:00.776 "num_blocks": 63488, 00:11:00.776 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:11:00.776 "assigned_rate_limits": { 00:11:00.776 "rw_ios_per_sec": 0, 00:11:00.776 "rw_mbytes_per_sec": 0, 00:11:00.776 "r_mbytes_per_sec": 0, 00:11:00.776 "w_mbytes_per_sec": 0 00:11:00.776 }, 00:11:00.776 "claimed": false, 00:11:00.776 "zoned": false, 00:11:00.776 "supported_io_types": { 00:11:00.776 "read": true, 00:11:00.776 "write": true, 00:11:00.776 "unmap": false, 00:11:00.776 "flush": false, 00:11:00.776 "reset": true, 00:11:00.776 "nvme_admin": false, 00:11:00.776 "nvme_io": false, 00:11:00.776 "nvme_io_md": false, 00:11:00.776 "write_zeroes": true, 00:11:00.776 "zcopy": false, 00:11:00.776 "get_zone_info": false, 00:11:00.776 "zone_management": false, 00:11:00.776 "zone_append": false, 00:11:00.776 "compare": false, 00:11:00.776 "compare_and_write": false, 00:11:00.776 "abort": false, 00:11:00.776 "seek_hole": false, 00:11:00.776 "seek_data": false, 00:11:00.776 "copy": false, 00:11:00.776 "nvme_iov_md": false 00:11:00.776 }, 00:11:00.776 "memory_domains": [ 00:11:00.776 { 00:11:00.776 "dma_device_id": "system", 00:11:00.776 "dma_device_type": 1 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.776 "dma_device_type": 2 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "system", 00:11:00.776 "dma_device_type": 1 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.776 "dma_device_type": 2 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "system", 00:11:00.776 "dma_device_type": 1 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.776 "dma_device_type": 2 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "system", 00:11:00.776 "dma_device_type": 1 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.776 "dma_device_type": 2 00:11:00.776 } 00:11:00.776 ], 00:11:00.776 "driver_specific": { 00:11:00.776 "raid": { 00:11:00.776 "uuid": "f9f5da0d-9da1-4d1e-910a-daf2e6954a4a", 00:11:00.776 "strip_size_kb": 0, 00:11:00.776 "state": "online", 00:11:00.776 "raid_level": "raid1", 00:11:00.776 "superblock": true, 00:11:00.776 "num_base_bdevs": 4, 00:11:00.776 "num_base_bdevs_discovered": 4, 00:11:00.776 "num_base_bdevs_operational": 4, 00:11:00.776 "base_bdevs_list": [ 00:11:00.776 { 00:11:00.776 "name": "NewBaseBdev", 00:11:00.776 "uuid": "20bbd7e3-bdb9-47ed-a3c1-e7366103bbc9", 00:11:00.776 "is_configured": true, 00:11:00.776 "data_offset": 2048, 00:11:00.776 "data_size": 63488 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "name": "BaseBdev2", 00:11:00.776 "uuid": "8fd024be-529e-4e48-86b6-8a871d140644", 00:11:00.776 "is_configured": true, 00:11:00.776 "data_offset": 2048, 00:11:00.776 "data_size": 63488 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "name": "BaseBdev3", 00:11:00.776 "uuid": "232cf1b4-d182-4fb6-9fbb-4306bd7896c9", 00:11:00.776 "is_configured": true, 00:11:00.776 "data_offset": 2048, 00:11:00.776 "data_size": 63488 00:11:00.776 }, 00:11:00.776 { 00:11:00.776 "name": "BaseBdev4", 00:11:00.776 "uuid": "ff9dbb91-4995-43f0-95c5-3d65c04342be", 
00:11:00.776 "is_configured": true, 00:11:00.776 "data_offset": 2048, 00:11:00.776 "data_size": 63488 00:11:00.776 } 00:11:00.776 ] 00:11:00.776 } 00:11:00.776 } 00:11:00.776 }' 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:00.776 BaseBdev2 00:11:00.776 BaseBdev3 00:11:00.776 BaseBdev4' 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.776 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.777 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.037 22:55:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.037 22:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 [2024-11-26 22:55:40.001274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.037 [2024-11-26 22:55:40.001308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.037 [2024-11-26 22:55:40.001392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.038 [2024-11-26 22:55:40.001705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.038 [2024-11-26 22:55:40.001726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86245 00:11:01.038 22:55:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86245 ']' 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86245 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86245 00:11:01.038 killing process with pid 86245 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86245' 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86245 00:11:01.038 [2024-11-26 22:55:40.052784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.038 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86245 00:11:01.038 [2024-11-26 22:55:40.127092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.610 22:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.610 00:11:01.610 real 0m10.055s 00:11:01.610 user 0m16.836s 00:11:01.610 sys 0m2.197s 00:11:01.610 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.610 ************************************ 00:11:01.610 END TEST raid_state_function_test_sb 00:11:01.610 ************************************ 00:11:01.610 22:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.610 22:55:40 
bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:01.610 22:55:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.610 22:55:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.610 22:55:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.610 ************************************ 00:11:01.610 START TEST raid_superblock_test 00:11:01.610 ************************************ 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:01.610 
22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86899 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86899 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 86899 ']' 00:11:01.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.610 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.610 [2024-11-26 22:55:40.635785] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:01.610 [2024-11-26 22:55:40.635932] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86899 ] 00:11:01.871 [2024-11-26 22:55:40.775422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:01.871 [2024-11-26 22:55:40.811973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.871 [2024-11-26 22:55:40.852563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.871 [2024-11-26 22:55:40.928851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.871 [2024-11-26 22:55:40.928899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.441 malloc1 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.441 [2024-11-26 22:55:41.495940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.441 [2024-11-26 22:55:41.496065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.441 [2024-11-26 22:55:41.496149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.441 [2024-11-26 22:55:41.496192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.441 [2024-11-26 22:55:41.498733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.441 [2024-11-26 22:55:41.498817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.441 pt1 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.441 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 malloc2 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 [2024-11-26 22:55:41.534734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.442 [2024-11-26 22:55:41.534844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.442 [2024-11-26 22:55:41.534872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:02.442 [2024-11-26 22:55:41.534883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.442 [2024-11-26 22:55:41.537323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.442 [2024-11-26 22:55:41.537360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.442 pt2 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 malloc3 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.701 [2024-11-26 22:55:41.569391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.701 [2024-11-26 22:55:41.569507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.701 [2024-11-26 22:55:41.569554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:02.701 [2024-11-26 22:55:41.569616] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.702 [2024-11-26 22:55:41.572043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.702 [2024-11-26 22:55:41.572125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.702 pt3 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.702 malloc4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:02.702 22:55:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.702 [2024-11-26 22:55:41.627301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:02.702 [2024-11-26 22:55:41.627467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.702 [2024-11-26 22:55:41.627547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:02.702 [2024-11-26 22:55:41.627632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.702 [2024-11-26 22:55:41.631343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.702 [2024-11-26 22:55:41.631460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:02.702 pt4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.702 [2024-11-26 22:55:41.639775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.702 [2024-11-26 22:55:41.642163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.702 [2024-11-26 22:55:41.642321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.702 [2024-11-26 22:55:41.642448] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:02.702 [2024-11-26 22:55:41.642718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:02.702 [2024-11-26 22:55:41.642786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.702 [2024-11-26 22:55:41.643148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:02.702 [2024-11-26 22:55:41.643411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:02.702 [2024-11-26 22:55:41.643475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:02.702 [2024-11-26 22:55:41.643717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.702 22:55:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.702 "name": "raid_bdev1", 00:11:02.702 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:02.702 "strip_size_kb": 0, 00:11:02.702 "state": "online", 00:11:02.702 "raid_level": "raid1", 00:11:02.702 "superblock": true, 00:11:02.702 "num_base_bdevs": 4, 00:11:02.702 "num_base_bdevs_discovered": 4, 00:11:02.702 "num_base_bdevs_operational": 4, 00:11:02.702 "base_bdevs_list": [ 00:11:02.702 { 00:11:02.702 "name": "pt1", 00:11:02.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.702 "is_configured": true, 00:11:02.702 "data_offset": 2048, 00:11:02.702 "data_size": 63488 00:11:02.702 }, 00:11:02.702 { 00:11:02.702 "name": "pt2", 00:11:02.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.702 "is_configured": true, 00:11:02.702 "data_offset": 2048, 00:11:02.702 "data_size": 63488 00:11:02.702 }, 00:11:02.702 { 00:11:02.702 "name": "pt3", 00:11:02.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.702 "is_configured": true, 00:11:02.702 "data_offset": 2048, 00:11:02.702 "data_size": 63488 00:11:02.702 }, 00:11:02.702 { 00:11:02.702 "name": "pt4", 00:11:02.702 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.702 "is_configured": true, 00:11:02.702 "data_offset": 2048, 00:11:02.702 "data_size": 63488 00:11:02.702 } 
00:11:02.702 ] 00:11:02.702 }' 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.702 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.271 [2024-11-26 22:55:42.104188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.271 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.271 "name": "raid_bdev1", 00:11:03.271 "aliases": [ 00:11:03.271 "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f" 00:11:03.271 ], 00:11:03.271 "product_name": "Raid Volume", 00:11:03.271 "block_size": 512, 00:11:03.271 "num_blocks": 63488, 00:11:03.271 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:03.271 "assigned_rate_limits": { 00:11:03.271 "rw_ios_per_sec": 0, 
00:11:03.271 "rw_mbytes_per_sec": 0, 00:11:03.271 "r_mbytes_per_sec": 0, 00:11:03.271 "w_mbytes_per_sec": 0 00:11:03.271 }, 00:11:03.271 "claimed": false, 00:11:03.271 "zoned": false, 00:11:03.271 "supported_io_types": { 00:11:03.271 "read": true, 00:11:03.271 "write": true, 00:11:03.271 "unmap": false, 00:11:03.271 "flush": false, 00:11:03.271 "reset": true, 00:11:03.271 "nvme_admin": false, 00:11:03.271 "nvme_io": false, 00:11:03.271 "nvme_io_md": false, 00:11:03.271 "write_zeroes": true, 00:11:03.271 "zcopy": false, 00:11:03.271 "get_zone_info": false, 00:11:03.272 "zone_management": false, 00:11:03.272 "zone_append": false, 00:11:03.272 "compare": false, 00:11:03.272 "compare_and_write": false, 00:11:03.272 "abort": false, 00:11:03.272 "seek_hole": false, 00:11:03.272 "seek_data": false, 00:11:03.272 "copy": false, 00:11:03.272 "nvme_iov_md": false 00:11:03.272 }, 00:11:03.272 "memory_domains": [ 00:11:03.272 { 00:11:03.272 "dma_device_id": "system", 00:11:03.272 "dma_device_type": 1 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.272 "dma_device_type": 2 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "system", 00:11:03.272 "dma_device_type": 1 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.272 "dma_device_type": 2 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "system", 00:11:03.272 "dma_device_type": 1 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.272 "dma_device_type": 2 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "system", 00:11:03.272 "dma_device_type": 1 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.272 "dma_device_type": 2 00:11:03.272 } 00:11:03.272 ], 00:11:03.272 "driver_specific": { 00:11:03.272 "raid": { 00:11:03.272 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:03.272 "strip_size_kb": 0, 00:11:03.272 
"state": "online", 00:11:03.272 "raid_level": "raid1", 00:11:03.272 "superblock": true, 00:11:03.272 "num_base_bdevs": 4, 00:11:03.272 "num_base_bdevs_discovered": 4, 00:11:03.272 "num_base_bdevs_operational": 4, 00:11:03.272 "base_bdevs_list": [ 00:11:03.272 { 00:11:03.272 "name": "pt1", 00:11:03.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.272 "is_configured": true, 00:11:03.272 "data_offset": 2048, 00:11:03.272 "data_size": 63488 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "name": "pt2", 00:11:03.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.272 "is_configured": true, 00:11:03.272 "data_offset": 2048, 00:11:03.272 "data_size": 63488 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "name": "pt3", 00:11:03.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.272 "is_configured": true, 00:11:03.272 "data_offset": 2048, 00:11:03.272 "data_size": 63488 00:11:03.272 }, 00:11:03.272 { 00:11:03.272 "name": "pt4", 00:11:03.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.272 "is_configured": true, 00:11:03.272 "data_offset": 2048, 00:11:03.272 "data_size": 63488 00:11:03.272 } 00:11:03.272 ] 00:11:03.272 } 00:11:03.272 } 00:11:03.272 }' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.272 pt2 00:11:03.272 pt3 00:11:03.272 pt4' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.272 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 [2024-11-26 22:55:42.388208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.532 22:55:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc97e4a9-5e94-49e0-b8bd-4b780d279b0f 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc97e4a9-5e94-49e0-b8bd-4b780d279b0f ']' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 [2024-11-26 22:55:42.431896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.532 [2024-11-26 22:55:42.431925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.532 [2024-11-26 22:55:42.432018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.532 [2024-11-26 22:55:42.432140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.532 [2024-11-26 22:55:42.432167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 [2024-11-26 22:55:42.575990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:03.532 [2024-11-26 22:55:42.578170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:03.532 [2024-11-26 22:55:42.578227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:03.532 [2024-11-26 22:55:42.578274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:03.532 [2024-11-26 22:55:42.578341] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:03.532 [2024-11-26 22:55:42.578393] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:03.532 [2024-11-26 22:55:42.578414] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:03.532 [2024-11-26 22:55:42.578436] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:03.532 [2024-11-26 22:55:42.578451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.532 [2024-11-26 22:55:42.578463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:11:03.532 request: 00:11:03.532 { 00:11:03.532 "name": "raid_bdev1", 00:11:03.532 "raid_level": "raid1", 00:11:03.532 "base_bdevs": [ 00:11:03.532 "malloc1", 00:11:03.532 "malloc2", 00:11:03.532 "malloc3", 00:11:03.532 
"malloc4" 00:11:03.532 ], 00:11:03.532 "superblock": false, 00:11:03.532 "method": "bdev_raid_create", 00:11:03.532 "req_id": 1 00:11:03.532 } 00:11:03.532 Got JSON-RPC error response 00:11:03.532 response: 00:11:03.532 { 00:11:03.532 "code": -17, 00:11:03.532 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:03.532 } 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.532 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.533 [2024-11-26 22:55:42.639991] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.533 [2024-11-26 22:55:42.640049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.533 [2024-11-26 22:55:42.640067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.533 [2024-11-26 22:55:42.640081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.533 [2024-11-26 22:55:42.642557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.533 [2024-11-26 22:55:42.642596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.533 [2024-11-26 22:55:42.642671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:03.533 [2024-11-26 22:55:42.642712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.533 pt1 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.533 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.792 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.792 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.792 "name": "raid_bdev1", 00:11:03.792 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:03.792 "strip_size_kb": 0, 00:11:03.792 "state": "configuring", 00:11:03.792 "raid_level": "raid1", 00:11:03.792 "superblock": true, 00:11:03.792 "num_base_bdevs": 4, 00:11:03.792 "num_base_bdevs_discovered": 1, 00:11:03.792 "num_base_bdevs_operational": 4, 00:11:03.792 "base_bdevs_list": [ 00:11:03.792 { 00:11:03.792 "name": "pt1", 00:11:03.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.792 "is_configured": true, 00:11:03.792 "data_offset": 2048, 00:11:03.792 "data_size": 63488 00:11:03.792 }, 00:11:03.792 { 00:11:03.792 "name": null, 00:11:03.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.792 "is_configured": false, 00:11:03.792 "data_offset": 2048, 00:11:03.792 "data_size": 63488 00:11:03.792 }, 00:11:03.792 { 00:11:03.792 "name": null, 00:11:03.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.792 "is_configured": false, 00:11:03.792 "data_offset": 2048, 00:11:03.792 "data_size": 63488 00:11:03.792 }, 00:11:03.792 { 00:11:03.792 "name": null, 00:11:03.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.792 "is_configured": 
false, 00:11:03.792 "data_offset": 2048, 00:11:03.792 "data_size": 63488 00:11:03.792 } 00:11:03.792 ] 00:11:03.792 }' 00:11:03.792 22:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.792 22:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.052 [2024-11-26 22:55:43.036075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.052 [2024-11-26 22:55:43.036143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.052 [2024-11-26 22:55:43.036165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:04.052 [2024-11-26 22:55:43.036179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.052 [2024-11-26 22:55:43.036577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.052 [2024-11-26 22:55:43.036599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.052 [2024-11-26 22:55:43.036671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.052 [2024-11-26 22:55:43.036696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.052 pt2 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 
00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.052 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.052 [2024-11-26 22:55:43.048083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.053 22:55:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.053 "name": "raid_bdev1", 00:11:04.053 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:04.053 "strip_size_kb": 0, 00:11:04.053 "state": "configuring", 00:11:04.053 "raid_level": "raid1", 00:11:04.053 "superblock": true, 00:11:04.053 "num_base_bdevs": 4, 00:11:04.053 "num_base_bdevs_discovered": 1, 00:11:04.053 "num_base_bdevs_operational": 4, 00:11:04.053 "base_bdevs_list": [ 00:11:04.053 { 00:11:04.053 "name": "pt1", 00:11:04.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.053 "is_configured": true, 00:11:04.053 "data_offset": 2048, 00:11:04.053 "data_size": 63488 00:11:04.053 }, 00:11:04.053 { 00:11:04.053 "name": null, 00:11:04.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.053 "is_configured": false, 00:11:04.053 "data_offset": 0, 00:11:04.053 "data_size": 63488 00:11:04.053 }, 00:11:04.053 { 00:11:04.053 "name": null, 00:11:04.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.053 "is_configured": false, 00:11:04.053 "data_offset": 2048, 00:11:04.053 "data_size": 63488 00:11:04.053 }, 00:11:04.053 { 00:11:04.053 "name": null, 00:11:04.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.053 "is_configured": false, 00:11:04.053 "data_offset": 2048, 00:11:04.053 "data_size": 63488 00:11:04.053 } 00:11:04.053 ] 00:11:04.053 }' 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.053 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.621 [2024-11-26 22:55:43.492194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.621 [2024-11-26 22:55:43.492270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.621 [2024-11-26 22:55:43.492295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:04.621 [2024-11-26 22:55:43.492306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.621 [2024-11-26 22:55:43.492735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.621 [2024-11-26 22:55:43.492754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.621 [2024-11-26 22:55:43.492834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.621 [2024-11-26 22:55:43.492856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.621 pt2 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.621 [2024-11-26 22:55:43.500191] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.621 [2024-11-26 22:55:43.500243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.621 [2024-11-26 22:55:43.500275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:04.621 [2024-11-26 22:55:43.500285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.621 [2024-11-26 22:55:43.500666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.621 [2024-11-26 22:55:43.500691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.621 [2024-11-26 22:55:43.500756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.621 [2024-11-26 22:55:43.500775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.621 pt3 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.621 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.621 [2024-11-26 22:55:43.508189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:04.621 [2024-11-26 22:55:43.508233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.621 [2024-11-26 22:55:43.508264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 
00:11:04.621 [2024-11-26 22:55:43.508275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.621 [2024-11-26 22:55:43.508625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.621 [2024-11-26 22:55:43.508653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:04.621 [2024-11-26 22:55:43.508716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:04.622 [2024-11-26 22:55:43.508765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:04.622 [2024-11-26 22:55:43.508885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:04.622 [2024-11-26 22:55:43.508894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.622 [2024-11-26 22:55:43.509157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:04.622 [2024-11-26 22:55:43.509346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:04.622 [2024-11-26 22:55:43.509372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:04.622 [2024-11-26 22:55:43.509488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.622 pt4 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.622 "name": "raid_bdev1", 00:11:04.622 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:04.622 "strip_size_kb": 0, 00:11:04.622 "state": "online", 00:11:04.622 "raid_level": "raid1", 00:11:04.622 "superblock": true, 00:11:04.622 "num_base_bdevs": 4, 00:11:04.622 "num_base_bdevs_discovered": 4, 00:11:04.622 "num_base_bdevs_operational": 4, 00:11:04.622 "base_bdevs_list": [ 00:11:04.622 { 00:11:04.622 "name": "pt1", 00:11:04.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.622 "is_configured": true, 00:11:04.622 
"data_offset": 2048, 00:11:04.622 "data_size": 63488 00:11:04.622 }, 00:11:04.622 { 00:11:04.622 "name": "pt2", 00:11:04.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.622 "is_configured": true, 00:11:04.622 "data_offset": 2048, 00:11:04.622 "data_size": 63488 00:11:04.622 }, 00:11:04.622 { 00:11:04.622 "name": "pt3", 00:11:04.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.622 "is_configured": true, 00:11:04.622 "data_offset": 2048, 00:11:04.622 "data_size": 63488 00:11:04.622 }, 00:11:04.622 { 00:11:04.622 "name": "pt4", 00:11:04.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.622 "is_configured": true, 00:11:04.622 "data_offset": 2048, 00:11:04.622 "data_size": 63488 00:11:04.622 } 00:11:04.622 ] 00:11:04.622 }' 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.622 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.882 22:55:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.882 [2024-11-26 22:55:43.976644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.882 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.882 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.882 "name": "raid_bdev1", 00:11:04.882 "aliases": [ 00:11:04.882 "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f" 00:11:04.882 ], 00:11:04.882 "product_name": "Raid Volume", 00:11:04.882 "block_size": 512, 00:11:04.882 "num_blocks": 63488, 00:11:04.882 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:04.882 "assigned_rate_limits": { 00:11:04.882 "rw_ios_per_sec": 0, 00:11:04.882 "rw_mbytes_per_sec": 0, 00:11:04.882 "r_mbytes_per_sec": 0, 00:11:04.882 "w_mbytes_per_sec": 0 00:11:04.882 }, 00:11:04.882 "claimed": false, 00:11:04.882 "zoned": false, 00:11:04.882 "supported_io_types": { 00:11:04.882 "read": true, 00:11:04.882 "write": true, 00:11:04.882 "unmap": false, 00:11:04.882 "flush": false, 00:11:04.882 "reset": true, 00:11:04.882 "nvme_admin": false, 00:11:04.882 "nvme_io": false, 00:11:04.882 "nvme_io_md": false, 00:11:04.882 "write_zeroes": true, 00:11:04.882 "zcopy": false, 00:11:04.882 "get_zone_info": false, 00:11:04.882 "zone_management": false, 00:11:04.882 "zone_append": false, 00:11:04.882 "compare": false, 00:11:04.882 "compare_and_write": false, 00:11:04.882 "abort": false, 00:11:04.882 "seek_hole": false, 00:11:04.882 "seek_data": false, 00:11:04.882 "copy": false, 00:11:04.882 "nvme_iov_md": false 00:11:04.882 }, 00:11:04.882 "memory_domains": [ 00:11:04.882 { 00:11:04.882 "dma_device_id": "system", 00:11:04.882 "dma_device_type": 1 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.882 "dma_device_type": 2 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "system", 00:11:04.882 "dma_device_type": 1 00:11:04.882 }, 00:11:04.882 { 
00:11:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.882 "dma_device_type": 2 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "system", 00:11:04.882 "dma_device_type": 1 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.882 "dma_device_type": 2 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "system", 00:11:04.882 "dma_device_type": 1 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.882 "dma_device_type": 2 00:11:04.882 } 00:11:04.882 ], 00:11:04.882 "driver_specific": { 00:11:04.882 "raid": { 00:11:04.882 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:04.882 "strip_size_kb": 0, 00:11:04.882 "state": "online", 00:11:04.882 "raid_level": "raid1", 00:11:04.882 "superblock": true, 00:11:04.882 "num_base_bdevs": 4, 00:11:04.882 "num_base_bdevs_discovered": 4, 00:11:04.882 "num_base_bdevs_operational": 4, 00:11:04.882 "base_bdevs_list": [ 00:11:04.882 { 00:11:04.882 "name": "pt1", 00:11:04.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.882 "is_configured": true, 00:11:04.882 "data_offset": 2048, 00:11:04.882 "data_size": 63488 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "name": "pt2", 00:11:04.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.882 "is_configured": true, 00:11:04.882 "data_offset": 2048, 00:11:04.882 "data_size": 63488 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "name": "pt3", 00:11:04.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.882 "is_configured": true, 00:11:04.882 "data_offset": 2048, 00:11:04.882 "data_size": 63488 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "name": "pt4", 00:11:04.882 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.882 "is_configured": true, 00:11:04.882 "data_offset": 2048, 00:11:04.882 "data_size": 63488 00:11:04.882 } 00:11:04.882 ] 00:11:04.882 } 00:11:04.882 } 00:11:04.882 }' 00:11:04.882 22:55:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.142 pt2 00:11:05.142 pt3 00:11:05.142 pt4' 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.142 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.143 22:55:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.143 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.402 [2024-11-26 22:55:44.276716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc97e4a9-5e94-49e0-b8bd-4b780d279b0f '!=' dc97e4a9-5e94-49e0-b8bd-4b780d279b0f ']' 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.402 [2024-11-26 22:55:44.320479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:05.402 22:55:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.402 "name": "raid_bdev1", 00:11:05.402 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:05.402 "strip_size_kb": 0, 00:11:05.402 "state": "online", 00:11:05.402 "raid_level": "raid1", 00:11:05.402 "superblock": true, 00:11:05.402 "num_base_bdevs": 4, 00:11:05.402 "num_base_bdevs_discovered": 3, 00:11:05.402 "num_base_bdevs_operational": 3, 00:11:05.402 "base_bdevs_list": [ 00:11:05.402 { 
00:11:05.402 "name": null, 00:11:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.402 "is_configured": false, 00:11:05.402 "data_offset": 0, 00:11:05.402 "data_size": 63488 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "name": "pt2", 00:11:05.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.402 "is_configured": true, 00:11:05.402 "data_offset": 2048, 00:11:05.402 "data_size": 63488 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "name": "pt3", 00:11:05.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.402 "is_configured": true, 00:11:05.402 "data_offset": 2048, 00:11:05.402 "data_size": 63488 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "name": "pt4", 00:11:05.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.402 "is_configured": true, 00:11:05.402 "data_offset": 2048, 00:11:05.402 "data_size": 63488 00:11:05.402 } 00:11:05.402 ] 00:11:05.402 }' 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.402 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.662 [2024-11-26 22:55:44.768554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.662 [2024-11-26 22:55:44.768591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.662 [2024-11-26 22:55:44.768678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.662 [2024-11-26 22:55:44.768764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.662 [2024-11-26 22:55:44.768782] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:05.662 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.922 22:55:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 [2024-11-26 22:55:44.864569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.922 [2024-11-26 22:55:44.864622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.922 [2024-11-26 22:55:44.864644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:05.922 [2024-11-26 22:55:44.864655] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.922 [2024-11-26 22:55:44.867182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.922 [2024-11-26 22:55:44.867223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.922 [2024-11-26 22:55:44.867323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.922 [2024-11-26 22:55:44.867368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.922 pt2 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.922 "name": "raid_bdev1", 00:11:05.922 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:05.922 "strip_size_kb": 0, 00:11:05.922 "state": "configuring", 00:11:05.922 "raid_level": "raid1", 00:11:05.922 "superblock": true, 00:11:05.922 "num_base_bdevs": 4, 00:11:05.922 "num_base_bdevs_discovered": 1, 00:11:05.922 "num_base_bdevs_operational": 3, 00:11:05.922 "base_bdevs_list": [ 00:11:05.922 { 00:11:05.922 "name": null, 00:11:05.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.922 "is_configured": false, 00:11:05.922 "data_offset": 2048, 00:11:05.922 "data_size": 63488 00:11:05.922 }, 00:11:05.922 { 00:11:05.922 "name": "pt2", 00:11:05.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.922 "is_configured": true, 00:11:05.922 "data_offset": 2048, 00:11:05.922 "data_size": 63488 00:11:05.922 }, 00:11:05.922 { 00:11:05.922 "name": null, 00:11:05.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.922 "is_configured": false, 00:11:05.922 "data_offset": 2048, 00:11:05.922 "data_size": 63488 00:11:05.922 }, 00:11:05.922 { 00:11:05.922 "name": null, 00:11:05.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.922 "is_configured": false, 00:11:05.922 "data_offset": 2048, 00:11:05.922 "data_size": 63488 00:11:05.922 } 00:11:05.922 ] 00:11:05.922 }' 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.922 22:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.269 [2024-11-26 22:55:45.328750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.269 [2024-11-26 22:55:45.328810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.269 [2024-11-26 22:55:45.328837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:06.269 [2024-11-26 22:55:45.328849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.269 [2024-11-26 22:55:45.329324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.269 [2024-11-26 22:55:45.329345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.269 [2024-11-26 22:55:45.329434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.269 [2024-11-26 22:55:45.329460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.269 pt3 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.269 
22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.269 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.543 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.543 "name": "raid_bdev1", 00:11:06.543 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:06.543 "strip_size_kb": 0, 00:11:06.543 "state": "configuring", 00:11:06.543 "raid_level": "raid1", 00:11:06.543 "superblock": true, 00:11:06.543 "num_base_bdevs": 4, 00:11:06.543 "num_base_bdevs_discovered": 2, 00:11:06.543 "num_base_bdevs_operational": 3, 00:11:06.543 "base_bdevs_list": [ 00:11:06.543 { 00:11:06.543 "name": null, 00:11:06.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.543 "is_configured": false, 00:11:06.543 "data_offset": 2048, 00:11:06.543 "data_size": 63488 00:11:06.543 }, 
00:11:06.543 { 00:11:06.543 "name": "pt2", 00:11:06.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.543 "is_configured": true, 00:11:06.543 "data_offset": 2048, 00:11:06.543 "data_size": 63488 00:11:06.543 }, 00:11:06.543 { 00:11:06.543 "name": "pt3", 00:11:06.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.543 "is_configured": true, 00:11:06.543 "data_offset": 2048, 00:11:06.543 "data_size": 63488 00:11:06.543 }, 00:11:06.543 { 00:11:06.543 "name": null, 00:11:06.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.543 "is_configured": false, 00:11:06.543 "data_offset": 2048, 00:11:06.543 "data_size": 63488 00:11:06.543 } 00:11:06.543 ] 00:11:06.543 }' 00:11:06.543 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.543 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.803 [2024-11-26 22:55:45.760842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:06.803 [2024-11-26 22:55:45.760909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.803 [2024-11-26 22:55:45.760937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:06.803 [2024-11-26 22:55:45.760947] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.803 [2024-11-26 22:55:45.761396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.803 [2024-11-26 22:55:45.761415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:06.803 [2024-11-26 22:55:45.761495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:06.803 [2024-11-26 22:55:45.761523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:06.803 [2024-11-26 22:55:45.761652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.803 [2024-11-26 22:55:45.761661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.803 [2024-11-26 22:55:45.761918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:06.803 [2024-11-26 22:55:45.762059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.803 [2024-11-26 22:55:45.762074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:06.803 [2024-11-26 22:55:45.762187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.803 pt4 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.803 "name": "raid_bdev1", 00:11:06.803 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:06.803 "strip_size_kb": 0, 00:11:06.803 "state": "online", 00:11:06.803 "raid_level": "raid1", 00:11:06.803 "superblock": true, 00:11:06.803 "num_base_bdevs": 4, 00:11:06.803 "num_base_bdevs_discovered": 3, 00:11:06.803 "num_base_bdevs_operational": 3, 00:11:06.803 "base_bdevs_list": [ 00:11:06.803 { 00:11:06.803 "name": null, 00:11:06.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.803 "is_configured": false, 00:11:06.803 "data_offset": 2048, 00:11:06.803 "data_size": 63488 00:11:06.803 }, 00:11:06.803 { 00:11:06.803 "name": "pt2", 00:11:06.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.803 "is_configured": true, 00:11:06.803 "data_offset": 2048, 00:11:06.803 
"data_size": 63488 00:11:06.803 }, 00:11:06.803 { 00:11:06.803 "name": "pt3", 00:11:06.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.803 "is_configured": true, 00:11:06.803 "data_offset": 2048, 00:11:06.803 "data_size": 63488 00:11:06.803 }, 00:11:06.803 { 00:11:06.803 "name": "pt4", 00:11:06.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.803 "is_configured": true, 00:11:06.803 "data_offset": 2048, 00:11:06.803 "data_size": 63488 00:11:06.803 } 00:11:06.803 ] 00:11:06.803 }' 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.803 22:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.063 [2024-11-26 22:55:46.160964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.063 [2024-11-26 22:55:46.160999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.063 [2024-11-26 22:55:46.161082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.063 [2024-11-26 22:55:46.161175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.063 [2024-11-26 22:55:46.161197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.063 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.322 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.322 [2024-11-26 22:55:46.232961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.323 [2024-11-26 22:55:46.233023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.323 [2024-11-26 22:55:46.233043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:07.323 [2024-11-26 22:55:46.233056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:11:07.323 [2024-11-26 22:55:46.235550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.323 [2024-11-26 22:55:46.235593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.323 [2024-11-26 22:55:46.235668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.323 [2024-11-26 22:55:46.235710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.323 [2024-11-26 22:55:46.235846] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:07.323 [2024-11-26 22:55:46.235869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.323 [2024-11-26 22:55:46.235885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:11:07.323 [2024-11-26 22:55:46.235933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.323 [2024-11-26 22:55:46.236033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.323 pt1 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.323 
22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.323 "name": "raid_bdev1", 00:11:07.323 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:07.323 "strip_size_kb": 0, 00:11:07.323 "state": "configuring", 00:11:07.323 "raid_level": "raid1", 00:11:07.323 "superblock": true, 00:11:07.323 "num_base_bdevs": 4, 00:11:07.323 "num_base_bdevs_discovered": 2, 00:11:07.323 "num_base_bdevs_operational": 3, 00:11:07.323 "base_bdevs_list": [ 00:11:07.323 { 00:11:07.323 "name": null, 00:11:07.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.323 "is_configured": false, 00:11:07.323 "data_offset": 2048, 00:11:07.323 "data_size": 63488 00:11:07.323 }, 00:11:07.323 { 00:11:07.323 "name": "pt2", 00:11:07.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.323 "is_configured": true, 00:11:07.323 "data_offset": 2048, 00:11:07.323 "data_size": 63488 
00:11:07.323 }, 00:11:07.323 { 00:11:07.323 "name": "pt3", 00:11:07.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.323 "is_configured": true, 00:11:07.323 "data_offset": 2048, 00:11:07.323 "data_size": 63488 00:11:07.323 }, 00:11:07.323 { 00:11:07.323 "name": null, 00:11:07.323 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.323 "is_configured": false, 00:11:07.323 "data_offset": 2048, 00:11:07.323 "data_size": 63488 00:11:07.323 } 00:11:07.323 ] 00:11:07.323 }' 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.323 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.583 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:07.583 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:07.583 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.583 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.844 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.844 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:07.844 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.844 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.844 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.844 [2024-11-26 22:55:46.745076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.844 [2024-11-26 22:55:46.745135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.844 [2024-11-26 22:55:46.745159] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:07.844 [2024-11-26 22:55:46.745170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.844 [2024-11-26 22:55:46.745613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.844 [2024-11-26 22:55:46.745633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.844 [2024-11-26 22:55:46.745711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:07.844 [2024-11-26 22:55:46.745735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.844 [2024-11-26 22:55:46.745850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:07.844 [2024-11-26 22:55:46.745859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.844 [2024-11-26 22:55:46.746122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:07.845 [2024-11-26 22:55:46.746285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:07.845 [2024-11-26 22:55:46.746309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:07.845 [2024-11-26 22:55:46.746437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.845 pt4 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.845 22:55:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.845 "name": "raid_bdev1", 00:11:07.845 "uuid": "dc97e4a9-5e94-49e0-b8bd-4b780d279b0f", 00:11:07.845 "strip_size_kb": 0, 00:11:07.845 "state": "online", 00:11:07.845 "raid_level": "raid1", 00:11:07.845 "superblock": true, 00:11:07.845 "num_base_bdevs": 4, 00:11:07.845 "num_base_bdevs_discovered": 3, 00:11:07.845 "num_base_bdevs_operational": 3, 00:11:07.845 "base_bdevs_list": [ 00:11:07.845 { 00:11:07.845 "name": null, 00:11:07.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.845 "is_configured": false, 00:11:07.845 "data_offset": 2048, 00:11:07.845 "data_size": 63488 00:11:07.845 }, 00:11:07.845 { 
00:11:07.845 "name": "pt2", 00:11:07.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.845 "is_configured": true, 00:11:07.845 "data_offset": 2048, 00:11:07.845 "data_size": 63488 00:11:07.845 }, 00:11:07.845 { 00:11:07.845 "name": "pt3", 00:11:07.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.845 "is_configured": true, 00:11:07.845 "data_offset": 2048, 00:11:07.845 "data_size": 63488 00:11:07.845 }, 00:11:07.845 { 00:11:07.845 "name": "pt4", 00:11:07.845 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.845 "is_configured": true, 00:11:07.845 "data_offset": 2048, 00:11:07.845 "data_size": 63488 00:11:07.845 } 00:11:07.845 ] 00:11:07.845 }' 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.845 22:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.106 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.106 [2024-11-26 
22:55:47.213600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' dc97e4a9-5e94-49e0-b8bd-4b780d279b0f '!=' dc97e4a9-5e94-49e0-b8bd-4b780d279b0f ']' 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86899 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 86899 ']' 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 86899 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86899 00:11:08.365 killing process with pid 86899 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86899' 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 86899 00:11:08.365 [2024-11-26 22:55:47.276034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.365 [2024-11-26 22:55:47.276160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.365 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 86899 00:11:08.365 [2024-11-26 22:55:47.276270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:11:08.365 [2024-11-26 22:55:47.276287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:08.365 [2024-11-26 22:55:47.356616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.624 22:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:08.624 00:11:08.624 real 0m7.144s 00:11:08.624 user 0m11.799s 00:11:08.624 sys 0m1.616s 00:11:08.624 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.624 22:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.624 ************************************ 00:11:08.624 END TEST raid_superblock_test 00:11:08.624 ************************************ 00:11:08.624 22:55:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:08.624 22:55:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.624 22:55:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.624 22:55:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.882 ************************************ 00:11:08.882 START TEST raid_read_error_test 00:11:08.882 ************************************ 00:11:08.882 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.883 22:55:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KpIpYWnSxi 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87376 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87376 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87376 ']' 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.883 22:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 [2024-11-26 22:55:47.863046] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:11:08.883 [2024-11-26 22:55:47.863181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87376 ] 00:11:08.883 [2024-11-26 22:55:47.999613] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:09.143 [2024-11-26 22:55:48.035736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.143 [2024-11-26 22:55:48.075355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.143 [2024-11-26 22:55:48.151808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.143 [2024-11-26 22:55:48.151865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 BaseBdev1_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 true 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 [2024-11-26 22:55:48.733997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.711 [2024-11-26 22:55:48.734097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.711 [2024-11-26 22:55:48.734118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.711 [2024-11-26 22:55:48.734135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.711 [2024-11-26 22:55:48.736619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.711 [2024-11-26 22:55:48.736680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.711 BaseBdev1 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 BaseBdev2_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 true 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 [2024-11-26 22:55:48.780591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.711 [2024-11-26 22:55:48.780665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.711 [2024-11-26 22:55:48.780683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.711 [2024-11-26 22:55:48.780696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.711 [2024-11-26 22:55:48.783150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.711 [2024-11-26 22:55:48.783212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.711 BaseBdev2 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 BaseBdev3_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 true 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.711 [2024-11-26 22:55:48.827123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.711 [2024-11-26 22:55:48.827183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.711 [2024-11-26 22:55:48.827203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.711 [2024-11-26 22:55:48.827217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.711 [2024-11-26 22:55:48.829632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.711 [2024-11-26 22:55:48.829671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.711 BaseBdev3 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.711 22:55:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.711 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.971 BaseBdev4_malloc 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.971 true 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.971 [2024-11-26 22:55:48.893006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:09.971 [2024-11-26 22:55:48.893075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.971 [2024-11-26 22:55:48.893098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.971 [2024-11-26 22:55:48.893115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.971 [2024-11-26 22:55:48.895821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.971 
[2024-11-26 22:55:48.895873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.971 BaseBdev4 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.971 [2024-11-26 22:55:48.905034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.971 [2024-11-26 22:55:48.907152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.971 [2024-11-26 22:55:48.907236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.971 [2024-11-26 22:55:48.907307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.971 [2024-11-26 22:55:48.907571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.971 [2024-11-26 22:55:48.907606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.971 [2024-11-26 22:55:48.907860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:09.971 [2024-11-26 22:55:48.908052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.971 [2024-11-26 22:55:48.908072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:09.971 [2024-11-26 22:55:48.908210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.971 "name": "raid_bdev1", 00:11:09.971 "uuid": "7d00f8ab-0500-4781-9673-78a175235bcd", 00:11:09.971 "strip_size_kb": 0, 00:11:09.971 "state": "online", 00:11:09.971 "raid_level": "raid1", 00:11:09.971 "superblock": true, 
00:11:09.971 "num_base_bdevs": 4, 00:11:09.971 "num_base_bdevs_discovered": 4, 00:11:09.971 "num_base_bdevs_operational": 4, 00:11:09.971 "base_bdevs_list": [ 00:11:09.971 { 00:11:09.971 "name": "BaseBdev1", 00:11:09.971 "uuid": "b4ebaacc-2cf9-5f4d-806c-ebfdd92e30a3", 00:11:09.971 "is_configured": true, 00:11:09.971 "data_offset": 2048, 00:11:09.971 "data_size": 63488 00:11:09.971 }, 00:11:09.971 { 00:11:09.971 "name": "BaseBdev2", 00:11:09.971 "uuid": "6c3a3884-fafc-569c-a94f-c47c6604054d", 00:11:09.971 "is_configured": true, 00:11:09.971 "data_offset": 2048, 00:11:09.971 "data_size": 63488 00:11:09.971 }, 00:11:09.971 { 00:11:09.971 "name": "BaseBdev3", 00:11:09.971 "uuid": "c3b518a8-71b6-5dcb-aa18-844632989e45", 00:11:09.971 "is_configured": true, 00:11:09.971 "data_offset": 2048, 00:11:09.971 "data_size": 63488 00:11:09.971 }, 00:11:09.971 { 00:11:09.971 "name": "BaseBdev4", 00:11:09.971 "uuid": "b5ee824b-4087-519c-aeba-26860ded066a", 00:11:09.971 "is_configured": true, 00:11:09.971 "data_offset": 2048, 00:11:09.971 "data_size": 63488 00:11:09.971 } 00:11:09.971 ] 00:11:09.971 }' 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.971 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.539 22:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.539 22:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.539 [2024-11-26 22:55:49.489604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.477 "name": "raid_bdev1", 00:11:11.477 "uuid": "7d00f8ab-0500-4781-9673-78a175235bcd", 00:11:11.477 "strip_size_kb": 0, 00:11:11.477 "state": "online", 00:11:11.477 "raid_level": "raid1", 00:11:11.477 "superblock": true, 00:11:11.477 "num_base_bdevs": 4, 00:11:11.477 "num_base_bdevs_discovered": 4, 00:11:11.477 "num_base_bdevs_operational": 4, 00:11:11.477 "base_bdevs_list": [ 00:11:11.477 { 00:11:11.477 "name": "BaseBdev1", 00:11:11.477 "uuid": "b4ebaacc-2cf9-5f4d-806c-ebfdd92e30a3", 00:11:11.477 "is_configured": true, 00:11:11.477 "data_offset": 2048, 00:11:11.477 "data_size": 63488 00:11:11.477 }, 00:11:11.477 { 00:11:11.477 "name": "BaseBdev2", 00:11:11.477 "uuid": "6c3a3884-fafc-569c-a94f-c47c6604054d", 00:11:11.477 "is_configured": true, 00:11:11.477 "data_offset": 2048, 00:11:11.477 "data_size": 63488 00:11:11.477 }, 00:11:11.477 { 00:11:11.477 "name": "BaseBdev3", 00:11:11.477 "uuid": "c3b518a8-71b6-5dcb-aa18-844632989e45", 00:11:11.477 "is_configured": true, 00:11:11.477 "data_offset": 2048, 00:11:11.477 "data_size": 63488 00:11:11.477 }, 00:11:11.477 { 00:11:11.477 "name": "BaseBdev4", 00:11:11.477 "uuid": "b5ee824b-4087-519c-aeba-26860ded066a", 00:11:11.477 "is_configured": true, 00:11:11.477 "data_offset": 2048, 00:11:11.477 "data_size": 63488 00:11:11.477 } 00:11:11.477 ] 00:11:11.477 }' 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.477 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.736 [2024-11-26 22:55:50.849389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.736 [2024-11-26 22:55:50.849437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.736 [2024-11-26 22:55:50.852224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.736 [2024-11-26 22:55:50.852312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.736 [2024-11-26 22:55:50.852445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.736 [2024-11-26 22:55:50.852460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:11.736 { 00:11:11.736 "results": [ 00:11:11.736 { 00:11:11.736 "job": "raid_bdev1", 00:11:11.736 "core_mask": "0x1", 00:11:11.736 "workload": "randrw", 00:11:11.736 "percentage": 50, 00:11:11.736 "status": "finished", 00:11:11.736 "queue_depth": 1, 00:11:11.736 "io_size": 131072, 00:11:11.736 "runtime": 1.357887, 00:11:11.736 "iops": 8497.761595773432, 00:11:11.736 "mibps": 1062.220199471679, 00:11:11.736 "io_failed": 0, 00:11:11.736 "io_timeout": 0, 00:11:11.736 "avg_latency_us": 114.88435094737851, 00:11:11.736 "min_latency_us": 23.428920073215377, 00:11:11.736 "max_latency_us": 1478.0301577617013 00:11:11.736 } 00:11:11.736 ], 00:11:11.736 "core_count": 1 00:11:11.736 } 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87376 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87376 ']' 
00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87376 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.736 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87376 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87376' 00:11:11.995 killing process with pid 87376 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87376 00:11:11.995 [2024-11-26 22:55:50.882728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.995 22:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87376 00:11:11.995 [2024-11-26 22:55:50.946244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KpIpYWnSxi 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:12.254 22:55:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:12.254 00:11:12.254 real 0m3.532s 00:11:12.254 user 0m4.308s 00:11:12.254 sys 0m0.664s 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.254 22:55:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.254 ************************************ 00:11:12.254 END TEST raid_read_error_test 00:11:12.254 ************************************ 00:11:12.254 22:55:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:12.254 22:55:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.254 22:55:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.254 22:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.255 ************************************ 00:11:12.255 START TEST raid_write_error_test 00:11:12.255 ************************************ 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:12.255 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 
00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sjJ9iftqt1 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87511 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87511 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87511 ']' 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.513 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.513 [2024-11-26 22:55:51.469317] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:12.513 [2024-11-26 22:55:51.469444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87511 ] 00:11:12.513 [2024-11-26 22:55:51.605039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:12.772 [2024-11-26 22:55:51.645653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.772 [2024-11-26 22:55:51.684180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.772 [2024-11-26 22:55:51.760342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.772 [2024-11-26 22:55:51.760400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 BaseBdev1_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 true 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 [2024-11-26 22:55:52.338519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.342 [2024-11-26 22:55:52.338588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.342 [2024-11-26 22:55:52.338615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.342 [2024-11-26 22:55:52.338634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.342 [2024-11-26 22:55:52.341089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.342 [2024-11-26 22:55:52.341132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.342 BaseBdev1 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 BaseBdev2_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 true 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 [2024-11-26 22:55:52.385316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.342 [2024-11-26 22:55:52.385375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.342 [2024-11-26 22:55:52.385394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.342 [2024-11-26 22:55:52.385407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.342 [2024-11-26 22:55:52.387850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.342 [2024-11-26 22:55:52.387896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.342 BaseBdev2 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 BaseBdev3_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 true 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.342 [2024-11-26 22:55:52.431819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.342 [2024-11-26 22:55:52.431880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.342 [2024-11-26 22:55:52.431916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.342 [2024-11-26 22:55:52.431930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.342 [2024-11-26 22:55:52.434283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.342 [2024-11-26 22:55:52.434324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.342 BaseBdev3 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.342 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.602 BaseBdev4_malloc 00:11:13.602 
22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.602 true 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.602 [2024-11-26 22:55:52.494749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.602 [2024-11-26 22:55:52.494825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.602 [2024-11-26 22:55:52.494850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.602 [2024-11-26 22:55:52.494865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.602 [2024-11-26 22:55:52.497550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.602 [2024-11-26 22:55:52.497603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.602 BaseBdev4 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.602 22:55:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.602 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.602 [2024-11-26 22:55:52.506778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.602 [2024-11-26 22:55:52.508916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.602 [2024-11-26 22:55:52.509001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.602 [2024-11-26 22:55:52.509077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.602 [2024-11-26 22:55:52.509342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.602 [2024-11-26 22:55:52.509372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.602 [2024-11-26 22:55:52.509657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:13.602 [2024-11-26 22:55:52.509835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.603 [2024-11-26 22:55:52.509854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:13.603 [2024-11-26 22:55:52.510013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.603 "name": "raid_bdev1", 00:11:13.603 "uuid": "0335b6a1-7c54-46ac-8acc-3ec58f7ae45c", 00:11:13.603 "strip_size_kb": 0, 00:11:13.603 "state": "online", 00:11:13.603 "raid_level": "raid1", 00:11:13.603 "superblock": true, 00:11:13.603 "num_base_bdevs": 4, 00:11:13.603 "num_base_bdevs_discovered": 4, 00:11:13.603 "num_base_bdevs_operational": 4, 00:11:13.603 "base_bdevs_list": [ 00:11:13.603 { 00:11:13.603 "name": "BaseBdev1", 00:11:13.603 "uuid": "3d295913-8a8b-5439-8e0c-1a916ed991e2", 00:11:13.603 "is_configured": true, 00:11:13.603 "data_offset": 2048, 00:11:13.603 "data_size": 63488 00:11:13.603 }, 00:11:13.603 { 00:11:13.603 
"name": "BaseBdev2", 00:11:13.603 "uuid": "08dd779e-47d0-5372-8ea2-8cb766559a3c", 00:11:13.603 "is_configured": true, 00:11:13.603 "data_offset": 2048, 00:11:13.603 "data_size": 63488 00:11:13.603 }, 00:11:13.603 { 00:11:13.603 "name": "BaseBdev3", 00:11:13.603 "uuid": "e42fbbeb-8e42-553c-b320-99ea8ba3c22c", 00:11:13.603 "is_configured": true, 00:11:13.603 "data_offset": 2048, 00:11:13.603 "data_size": 63488 00:11:13.603 }, 00:11:13.603 { 00:11:13.603 "name": "BaseBdev4", 00:11:13.603 "uuid": "08959f82-299b-5c54-94e9-7286f6928b9f", 00:11:13.603 "is_configured": true, 00:11:13.603 "data_offset": 2048, 00:11:13.603 "data_size": 63488 00:11:13.603 } 00:11:13.603 ] 00:11:13.603 }' 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.603 22:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.862 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.862 22:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.122 [2024-11-26 22:55:53.027432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.060 [2024-11-26 22:55:53.938919] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:15.060 [2024-11-26 22:55:53.938992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.060 [2024-11-26 22:55:53.939241] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006e50 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.060 "name": "raid_bdev1", 00:11:15.060 "uuid": "0335b6a1-7c54-46ac-8acc-3ec58f7ae45c", 00:11:15.060 "strip_size_kb": 0, 00:11:15.060 "state": "online", 00:11:15.060 "raid_level": "raid1", 00:11:15.060 "superblock": true, 00:11:15.060 "num_base_bdevs": 4, 00:11:15.060 "num_base_bdevs_discovered": 3, 00:11:15.060 "num_base_bdevs_operational": 3, 00:11:15.060 "base_bdevs_list": [ 00:11:15.060 { 00:11:15.060 "name": null, 00:11:15.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.060 "is_configured": false, 00:11:15.060 "data_offset": 0, 00:11:15.060 "data_size": 63488 00:11:15.060 }, 00:11:15.060 { 00:11:15.060 "name": "BaseBdev2", 00:11:15.060 "uuid": "08dd779e-47d0-5372-8ea2-8cb766559a3c", 00:11:15.060 "is_configured": true, 00:11:15.060 "data_offset": 2048, 00:11:15.060 "data_size": 63488 00:11:15.060 }, 00:11:15.060 { 00:11:15.060 "name": "BaseBdev3", 00:11:15.060 "uuid": "e42fbbeb-8e42-553c-b320-99ea8ba3c22c", 00:11:15.060 "is_configured": true, 00:11:15.060 "data_offset": 2048, 00:11:15.060 "data_size": 63488 00:11:15.060 }, 00:11:15.060 { 00:11:15.060 "name": "BaseBdev4", 00:11:15.060 "uuid": "08959f82-299b-5c54-94e9-7286f6928b9f", 00:11:15.060 "is_configured": true, 00:11:15.060 "data_offset": 2048, 00:11:15.060 "data_size": 63488 00:11:15.060 } 00:11:15.060 ] 00:11:15.060 }' 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.060 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.319 [2024-11-26 22:55:54.393281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.319 [2024-11-26 22:55:54.393324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.319 [2024-11-26 22:55:54.396055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.319 [2024-11-26 22:55:54.396134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.319 [2024-11-26 22:55:54.396268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.319 [2024-11-26 22:55:54.396293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:15.319 { 00:11:15.319 "results": [ 00:11:15.319 { 00:11:15.319 "job": "raid_bdev1", 00:11:15.319 "core_mask": "0x1", 00:11:15.319 "workload": "randrw", 00:11:15.319 "percentage": 50, 00:11:15.319 "status": "finished", 00:11:15.319 "queue_depth": 1, 00:11:15.319 "io_size": 131072, 00:11:15.319 "runtime": 1.363837, 00:11:15.319 "iops": 9201.246189977248, 00:11:15.319 "mibps": 1150.155773747156, 00:11:15.319 "io_failed": 0, 00:11:15.319 "io_timeout": 0, 00:11:15.319 "avg_latency_us": 105.93257090620179, 00:11:15.319 "min_latency_us": 22.871088642900723, 00:11:15.319 "max_latency_us": 1535.1520962259217 00:11:15.319 } 00:11:15.319 ], 00:11:15.319 "core_count": 1 00:11:15.319 } 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87511 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87511 ']' 
00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87511 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87511 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.319 killing process with pid 87511 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87511' 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87511 00:11:15.319 [2024-11-26 22:55:54.429927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.319 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87511 00:11:15.579 [2024-11-26 22:55:54.495131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sjJ9iftqt1 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:15.838 22:55:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:15.838 00:11:15.838 real 0m3.474s 00:11:15.838 user 0m4.232s 00:11:15.838 sys 0m0.635s 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.838 22:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 ************************************ 00:11:15.838 END TEST raid_write_error_test 00:11:15.838 ************************************ 00:11:15.838 22:55:54 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:15.838 22:55:54 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:15.838 22:55:54 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:15.838 22:55:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:15.838 22:55:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.838 22:55:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 ************************************ 00:11:15.838 START TEST raid_rebuild_test 00:11:15.838 ************************************ 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87638 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 87638 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87638 ']' 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.838 22:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.096 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:16.096 Zero copy mechanism will not be used. 00:11:16.096 [2024-11-26 22:55:55.016544] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:16.096 [2024-11-26 22:55:55.016692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87638 ] 00:11:16.096 [2024-11-26 22:55:55.157333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:16.096 [2024-11-26 22:55:55.196416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.354 [2024-11-26 22:55:55.235679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.354 [2024-11-26 22:55:55.312546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.354 [2024-11-26 22:55:55.312591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 BaseBdev1_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 [2024-11-26 22:55:55.858351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:16.921 [2024-11-26 22:55:55.858436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.921 [2024-11-26 22:55:55.858471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:16.921 [2024-11-26 22:55:55.858490] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.921 [2024-11-26 22:55:55.860943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.921 [2024-11-26 22:55:55.860987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.921 BaseBdev1 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 BaseBdev2_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 [2024-11-26 22:55:55.892902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:16.921 [2024-11-26 22:55:55.893026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.921 [2024-11-26 22:55:55.893053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:16.921 [2024-11-26 22:55:55.893067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.921 [2024-11-26 22:55:55.895479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.921 [2024-11-26 22:55:55.895521] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:16.921 BaseBdev2 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 spare_malloc 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 spare_delay 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.921 [2024-11-26 22:55:55.939447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:16.921 [2024-11-26 22:55:55.939560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.921 [2024-11-26 22:55:55.939601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:16.921 [2024-11-26 22:55:55.939669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.921 [2024-11-26 
22:55:55.942067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.921 [2024-11-26 22:55:55.942149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:16.921 spare 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.921 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.922 [2024-11-26 22:55:55.951508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.922 [2024-11-26 22:55:55.953691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.922 [2024-11-26 22:55:55.953830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:16.922 [2024-11-26 22:55:55.953881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:16.922 [2024-11-26 22:55:55.954198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:16.922 [2024-11-26 22:55:55.954419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:16.922 [2024-11-26 22:55:55.954472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:16.922 [2024-11-26 22:55:55.954649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:16.922 22:55:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.922 22:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.922 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.922 "name": "raid_bdev1", 00:11:16.922 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:16.922 "strip_size_kb": 0, 00:11:16.922 "state": "online", 00:11:16.922 "raid_level": "raid1", 00:11:16.922 "superblock": false, 00:11:16.922 "num_base_bdevs": 2, 00:11:16.922 "num_base_bdevs_discovered": 2, 00:11:16.922 "num_base_bdevs_operational": 2, 00:11:16.922 "base_bdevs_list": [ 00:11:16.922 { 00:11:16.922 "name": "BaseBdev1", 
00:11:16.922 "uuid": "90770e17-b155-5004-b057-041a9ab300f7", 00:11:16.922 "is_configured": true, 00:11:16.922 "data_offset": 0, 00:11:16.922 "data_size": 65536 00:11:16.922 }, 00:11:16.922 { 00:11:16.922 "name": "BaseBdev2", 00:11:16.922 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:16.922 "is_configured": true, 00:11:16.922 "data_offset": 0, 00:11:16.922 "data_size": 65536 00:11:16.922 } 00:11:16.922 ] 00:11:16.922 }' 00:11:16.922 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.922 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.489 [2024-11-26 22:55:56.379913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:17.489 
22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.489 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:17.748 [2024-11-26 22:55:56.655736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:17.748 /dev/nbd0 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.748 1+0 records in 00:11:17.748 1+0 records out 00:11:17.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421203 s, 9.7 MB/s 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:17.748 22:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:21.954 65536+0 records in 00:11:21.954 65536+0 records out 00:11:21.955 33554432 bytes (34 MB, 32 MiB) copied, 4.22163 s, 7.9 MB/s 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.955 22:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.215 [2024-11-26 22:56:01.166085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.215 [2024-11-26 22:56:01.182947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.215 22:56:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.215 "name": "raid_bdev1", 00:11:22.215 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:22.215 "strip_size_kb": 0, 00:11:22.215 "state": "online", 00:11:22.215 "raid_level": "raid1", 00:11:22.215 "superblock": false, 00:11:22.215 "num_base_bdevs": 2, 00:11:22.215 "num_base_bdevs_discovered": 1, 00:11:22.215 "num_base_bdevs_operational": 1, 00:11:22.215 "base_bdevs_list": [ 00:11:22.215 { 00:11:22.215 "name": null, 00:11:22.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.215 "is_configured": false, 00:11:22.215 "data_offset": 0, 00:11:22.215 "data_size": 65536 00:11:22.215 }, 00:11:22.215 { 00:11:22.215 "name": "BaseBdev2", 00:11:22.215 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:22.215 "is_configured": true, 00:11:22.215 "data_offset": 0, 00:11:22.215 "data_size": 65536 00:11:22.215 } 00:11:22.215 ] 00:11:22.215 }' 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.215 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.783 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:22.783 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.783 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.783 [2024-11-26 22:56:01.623040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.783 [2024-11-26 22:56:01.632040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:11:22.783 22:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.783 22:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:22.783 [2024-11-26 22:56:01.634396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.723 "name": "raid_bdev1", 00:11:23.723 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:23.723 "strip_size_kb": 0, 00:11:23.723 "state": "online", 00:11:23.723 "raid_level": "raid1", 00:11:23.723 "superblock": false, 00:11:23.723 "num_base_bdevs": 2, 00:11:23.723 "num_base_bdevs_discovered": 2, 00:11:23.723 "num_base_bdevs_operational": 2, 00:11:23.723 "process": { 00:11:23.723 "type": "rebuild", 00:11:23.723 "target": "spare", 00:11:23.723 "progress": { 00:11:23.723 "blocks": 20480, 00:11:23.723 "percent": 31 00:11:23.723 } 00:11:23.723 }, 00:11:23.723 "base_bdevs_list": [ 00:11:23.723 { 00:11:23.723 "name": "spare", 00:11:23.723 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:23.723 "is_configured": true, 00:11:23.723 "data_offset": 0, 00:11:23.723 
"data_size": 65536 00:11:23.723 }, 00:11:23.723 { 00:11:23.723 "name": "BaseBdev2", 00:11:23.723 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:23.723 "is_configured": true, 00:11:23.723 "data_offset": 0, 00:11:23.723 "data_size": 65536 00:11:23.723 } 00:11:23.723 ] 00:11:23.723 }' 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.723 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.723 [2024-11-26 22:56:02.793025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.723 [2024-11-26 22:56:02.845011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:23.723 [2024-11-26 22:56:02.845081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.723 [2024-11-26 22:56:02.845099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.723 [2024-11-26 22:56:02.845110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.983 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.983 "name": "raid_bdev1", 00:11:23.983 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:23.983 "strip_size_kb": 0, 00:11:23.983 "state": "online", 00:11:23.983 "raid_level": "raid1", 00:11:23.983 "superblock": false, 00:11:23.983 "num_base_bdevs": 2, 00:11:23.983 "num_base_bdevs_discovered": 1, 00:11:23.983 "num_base_bdevs_operational": 1, 00:11:23.983 "base_bdevs_list": [ 00:11:23.983 { 00:11:23.983 "name": null, 00:11:23.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.983 
"is_configured": false, 00:11:23.984 "data_offset": 0, 00:11:23.984 "data_size": 65536 00:11:23.984 }, 00:11:23.984 { 00:11:23.984 "name": "BaseBdev2", 00:11:23.984 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:23.984 "is_configured": true, 00:11:23.984 "data_offset": 0, 00:11:23.984 "data_size": 65536 00:11:23.984 } 00:11:23.984 ] 00:11:23.984 }' 00:11:23.984 22:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.984 22:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.243 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.243 "name": "raid_bdev1", 00:11:24.243 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:24.243 "strip_size_kb": 0, 00:11:24.243 "state": "online", 00:11:24.243 "raid_level": "raid1", 00:11:24.243 "superblock": false, 00:11:24.243 "num_base_bdevs": 2, 00:11:24.243 
"num_base_bdevs_discovered": 1, 00:11:24.243 "num_base_bdevs_operational": 1, 00:11:24.243 "base_bdevs_list": [ 00:11:24.243 { 00:11:24.243 "name": null, 00:11:24.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.243 "is_configured": false, 00:11:24.244 "data_offset": 0, 00:11:24.244 "data_size": 65536 00:11:24.244 }, 00:11:24.244 { 00:11:24.244 "name": "BaseBdev2", 00:11:24.244 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:24.244 "is_configured": true, 00:11:24.244 "data_offset": 0, 00:11:24.244 "data_size": 65536 00:11:24.244 } 00:11:24.244 ] 00:11:24.244 }' 00:11:24.244 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.503 [2024-11-26 22:56:03.441588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.503 [2024-11-26 22:56:03.450293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.503 22:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:24.503 [2024-11-26 22:56:03.452507] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.441 "name": "raid_bdev1", 00:11:25.441 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:25.441 "strip_size_kb": 0, 00:11:25.441 "state": "online", 00:11:25.441 "raid_level": "raid1", 00:11:25.441 "superblock": false, 00:11:25.441 "num_base_bdevs": 2, 00:11:25.441 "num_base_bdevs_discovered": 2, 00:11:25.441 "num_base_bdevs_operational": 2, 00:11:25.441 "process": { 00:11:25.441 "type": "rebuild", 00:11:25.441 "target": "spare", 00:11:25.441 "progress": { 00:11:25.441 "blocks": 20480, 00:11:25.441 "percent": 31 00:11:25.441 } 00:11:25.441 }, 00:11:25.441 "base_bdevs_list": [ 00:11:25.441 { 00:11:25.441 "name": "spare", 00:11:25.441 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:25.441 "is_configured": true, 00:11:25.441 "data_offset": 0, 00:11:25.441 "data_size": 65536 00:11:25.441 }, 00:11:25.441 { 00:11:25.441 "name": "BaseBdev2", 00:11:25.441 "uuid": 
"44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:25.441 "is_configured": true, 00:11:25.441 "data_offset": 0, 00:11:25.441 "data_size": 65536 00:11:25.441 } 00:11:25.441 ] 00:11:25.441 }' 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.441 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=293 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.701 "name": "raid_bdev1", 00:11:25.701 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:25.701 "strip_size_kb": 0, 00:11:25.701 "state": "online", 00:11:25.701 "raid_level": "raid1", 00:11:25.701 "superblock": false, 00:11:25.701 "num_base_bdevs": 2, 00:11:25.701 "num_base_bdevs_discovered": 2, 00:11:25.701 "num_base_bdevs_operational": 2, 00:11:25.701 "process": { 00:11:25.701 "type": "rebuild", 00:11:25.701 "target": "spare", 00:11:25.701 "progress": { 00:11:25.701 "blocks": 22528, 00:11:25.701 "percent": 34 00:11:25.701 } 00:11:25.701 }, 00:11:25.701 "base_bdevs_list": [ 00:11:25.701 { 00:11:25.701 "name": "spare", 00:11:25.701 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 0, 00:11:25.701 "data_size": 65536 00:11:25.701 }, 00:11:25.701 { 00:11:25.701 "name": "BaseBdev2", 00:11:25.701 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 0, 00:11:25.701 "data_size": 65536 00:11:25.701 } 00:11:25.701 ] 00:11:25.701 }' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.701 22:56:04 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.639 22:56:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.899 "name": "raid_bdev1", 00:11:26.899 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:26.899 "strip_size_kb": 0, 00:11:26.899 "state": "online", 00:11:26.899 "raid_level": "raid1", 00:11:26.899 "superblock": false, 00:11:26.899 "num_base_bdevs": 2, 00:11:26.899 "num_base_bdevs_discovered": 2, 00:11:26.899 "num_base_bdevs_operational": 2, 00:11:26.899 "process": { 00:11:26.899 "type": "rebuild", 00:11:26.899 "target": "spare", 00:11:26.899 "progress": { 00:11:26.899 "blocks": 45056, 00:11:26.899 "percent": 68 00:11:26.899 } 00:11:26.899 }, 00:11:26.899 "base_bdevs_list": [ 00:11:26.899 { 00:11:26.899 "name": "spare", 00:11:26.899 "uuid": 
"e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:26.899 "is_configured": true, 00:11:26.899 "data_offset": 0, 00:11:26.899 "data_size": 65536 00:11:26.899 }, 00:11:26.899 { 00:11:26.899 "name": "BaseBdev2", 00:11:26.899 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:26.899 "is_configured": true, 00:11:26.899 "data_offset": 0, 00:11:26.899 "data_size": 65536 00:11:26.899 } 00:11:26.899 ] 00:11:26.899 }' 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.899 22:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.837 [2024-11-26 22:56:06.679238] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:27.837 [2024-11-26 22:56:06.679344] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:27.837 [2024-11-26 22:56:06.679406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.837 22:56:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.837 "name": "raid_bdev1", 00:11:27.837 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:27.837 "strip_size_kb": 0, 00:11:27.837 "state": "online", 00:11:27.837 "raid_level": "raid1", 00:11:27.837 "superblock": false, 00:11:27.837 "num_base_bdevs": 2, 00:11:27.837 "num_base_bdevs_discovered": 2, 00:11:27.837 "num_base_bdevs_operational": 2, 00:11:27.837 "base_bdevs_list": [ 00:11:27.837 { 00:11:27.837 "name": "spare", 00:11:27.837 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:27.837 "is_configured": true, 00:11:27.837 "data_offset": 0, 00:11:27.837 "data_size": 65536 00:11:27.837 }, 00:11:27.837 { 00:11:27.837 "name": "BaseBdev2", 00:11:27.837 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:27.837 "is_configured": true, 00:11:27.837 "data_offset": 0, 00:11:27.837 "data_size": 65536 00:11:27.837 } 00:11:27.837 ] 00:11:27.837 }' 00:11:27.837 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.097 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:28.097 22:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.097 "name": "raid_bdev1", 00:11:28.097 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:28.097 "strip_size_kb": 0, 00:11:28.097 "state": "online", 00:11:28.097 "raid_level": "raid1", 00:11:28.097 "superblock": false, 00:11:28.097 "num_base_bdevs": 2, 00:11:28.097 "num_base_bdevs_discovered": 2, 00:11:28.097 "num_base_bdevs_operational": 2, 00:11:28.097 "base_bdevs_list": [ 00:11:28.097 { 00:11:28.097 "name": "spare", 00:11:28.097 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:28.097 "is_configured": true, 00:11:28.097 "data_offset": 0, 00:11:28.097 "data_size": 65536 00:11:28.097 }, 00:11:28.097 { 00:11:28.097 "name": "BaseBdev2", 00:11:28.097 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:28.097 "is_configured": true, 00:11:28.097 "data_offset": 0, 00:11:28.097 "data_size": 65536 
00:11:28.097 } 00:11:28.097 ] 00:11:28.097 }' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.097 
22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.097 "name": "raid_bdev1", 00:11:28.097 "uuid": "0117d4ab-9af9-4e48-988f-2127fc261f1c", 00:11:28.097 "strip_size_kb": 0, 00:11:28.097 "state": "online", 00:11:28.097 "raid_level": "raid1", 00:11:28.097 "superblock": false, 00:11:28.097 "num_base_bdevs": 2, 00:11:28.097 "num_base_bdevs_discovered": 2, 00:11:28.097 "num_base_bdevs_operational": 2, 00:11:28.097 "base_bdevs_list": [ 00:11:28.097 { 00:11:28.097 "name": "spare", 00:11:28.097 "uuid": "e8a05a74-bed7-5b18-b503-5d91677b6762", 00:11:28.097 "is_configured": true, 00:11:28.097 "data_offset": 0, 00:11:28.097 "data_size": 65536 00:11:28.097 }, 00:11:28.097 { 00:11:28.097 "name": "BaseBdev2", 00:11:28.097 "uuid": "44736d3a-4685-5016-8966-1b4d69cd59b1", 00:11:28.097 "is_configured": true, 00:11:28.097 "data_offset": 0, 00:11:28.097 "data_size": 65536 00:11:28.097 } 00:11:28.097 ] 00:11:28.097 }' 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.097 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.668 [2024-11-26 22:56:07.563491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.668 [2024-11-26 22:56:07.563538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.668 [2024-11-26 22:56:07.563663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.668 [2024-11-26 22:56:07.563771] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.668 [2024-11-26 22:56:07.563782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.668 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:28.927 /dev/nbd0 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.927 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.928 1+0 records in 00:11:28.928 1+0 records out 00:11:28.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352401 s, 11.6 MB/s 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.928 22:56:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:29.187 /dev/nbd1 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.187 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.187 1+0 records in 00:11:29.187 1+0 records out 00:11:29.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591113 s, 6.9 MB/s 00:11:29.188 22:56:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.188 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.448 
22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.448 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87638 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87638 ']' 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87638 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87638 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87638' 00:11:29.707 killing process with pid 87638 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87638 00:11:29.707 Received shutdown signal, test time was about 60.000000 seconds 00:11:29.707 00:11:29.707 Latency(us) 00:11:29.707 [2024-11-26T22:56:08.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.707 [2024-11-26T22:56:08.835Z] =================================================================================================================== 00:11:29.707 [2024-11-26T22:56:08.835Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:29.707 [2024-11-26 22:56:08.698424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.707 22:56:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87638 00:11:29.707 [2024-11-26 22:56:08.756760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.967 22:56:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:29.967 00:11:29.967 real 0m14.172s 00:11:29.967 user 0m16.037s 00:11:29.967 sys 0m3.194s 00:11:29.967 ************************************ 00:11:29.967 END TEST raid_rebuild_test 00:11:29.967 ************************************ 00:11:29.967 22:56:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.967 22:56:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.231 22:56:09 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:30.231 22:56:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:30.231 22:56:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.231 22:56:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.231 ************************************ 00:11:30.231 START TEST raid_rebuild_test_sb 00:11:30.231 ************************************ 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:30.231 22:56:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88045 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88045 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88045 ']' 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.231 
22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.231 22:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.231 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:30.231 Zero copy mechanism will not be used. 00:11:30.231 [2024-11-26 22:56:09.263673] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:30.231 [2024-11-26 22:56:09.263844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88045 ] 00:11:30.490 [2024-11-26 22:56:09.399525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:30.490 [2024-11-26 22:56:09.439921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.490 [2024-11-26 22:56:09.478496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.490 [2024-11-26 22:56:09.554346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.490 [2024-11-26 22:56:09.554514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 BaseBdev1_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 [2024-11-26 22:56:10.109061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:31.059 [2024-11-26 22:56:10.109209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.059 [2024-11-26 22:56:10.109280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:31.059 [2024-11-26 
22:56:10.109348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.059 [2024-11-26 22:56:10.111899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.059 [2024-11-26 22:56:10.111998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.059 BaseBdev1 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 BaseBdev2_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 [2024-11-26 22:56:10.143689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:31.059 [2024-11-26 22:56:10.143749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.059 [2024-11-26 22:56:10.143771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:31.059 [2024-11-26 22:56:10.143784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.059 [2024-11-26 22:56:10.146239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:31.059 [2024-11-26 22:56:10.146290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.059 BaseBdev2 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 spare_malloc 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.059 spare_delay 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.059 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.319 [2024-11-26 22:56:10.190141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:31.319 [2024-11-26 22:56:10.190206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.319 [2024-11-26 22:56:10.190226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:31.319 [2024-11-26 22:56:10.190242] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.320 [2024-11-26 22:56:10.192726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.320 [2024-11-26 22:56:10.192769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:31.320 spare 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.320 [2024-11-26 22:56:10.202225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.320 [2024-11-26 22:56:10.204423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.320 [2024-11-26 22:56:10.204581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:31.320 [2024-11-26 22:56:10.204597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.320 [2024-11-26 22:56:10.204850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:31.320 [2024-11-26 22:56:10.205008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:31.320 [2024-11-26 22:56:10.205019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:31.320 [2024-11-26 22:56:10.205136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.320 "name": "raid_bdev1", 00:11:31.320 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:31.320 "strip_size_kb": 0, 00:11:31.320 "state": "online", 00:11:31.320 "raid_level": "raid1", 00:11:31.320 "superblock": true, 00:11:31.320 "num_base_bdevs": 2, 00:11:31.320 
"num_base_bdevs_discovered": 2, 00:11:31.320 "num_base_bdevs_operational": 2, 00:11:31.320 "base_bdevs_list": [ 00:11:31.320 { 00:11:31.320 "name": "BaseBdev1", 00:11:31.320 "uuid": "44c38a40-978c-5791-a45b-fff5b47a34e6", 00:11:31.320 "is_configured": true, 00:11:31.320 "data_offset": 2048, 00:11:31.320 "data_size": 63488 00:11:31.320 }, 00:11:31.320 { 00:11:31.320 "name": "BaseBdev2", 00:11:31.320 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:31.320 "is_configured": true, 00:11:31.320 "data_offset": 2048, 00:11:31.320 "data_size": 63488 00:11:31.320 } 00:11:31.320 ] 00:11:31.320 }' 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.320 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.582 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.582 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.582 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.582 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:31.582 [2024-11-26 22:56:10.658617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.582 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.850 22:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:32.119 [2024-11-26 22:56:10.986502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:32.119 /dev/nbd0 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:32.119 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.120 1+0 records in 00:11:32.120 1+0 records out 00:11:32.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515148 s, 8.0 MB/s 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.120 22:56:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:32.120 22:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:37.398 63488+0 records in 00:11:37.398 63488+0 records out 00:11:37.398 32505856 bytes (33 MB, 31 MiB) copied, 4.40317 s, 7.4 MB/s 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:37.398 [2024-11-26 22:56:15.681069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.398 [2024-11-26 22:56:15.693129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.398 22:56:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.398 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.398 "name": "raid_bdev1", 00:11:37.398 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:37.398 "strip_size_kb": 0, 00:11:37.398 "state": "online", 00:11:37.398 "raid_level": "raid1", 00:11:37.398 "superblock": true, 00:11:37.398 "num_base_bdevs": 2, 00:11:37.398 "num_base_bdevs_discovered": 1, 00:11:37.398 "num_base_bdevs_operational": 1, 00:11:37.398 "base_bdevs_list": [ 00:11:37.399 { 00:11:37.399 "name": null, 00:11:37.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.399 "is_configured": false, 00:11:37.399 "data_offset": 0, 00:11:37.399 "data_size": 63488 00:11:37.399 }, 00:11:37.399 { 00:11:37.399 "name": "BaseBdev2", 00:11:37.399 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:37.399 "is_configured": true, 00:11:37.399 "data_offset": 2048, 00:11:37.399 "data_size": 63488 00:11:37.399 } 00:11:37.399 ] 00:11:37.399 }' 00:11:37.399 22:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.399 22:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.399 22:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.399 22:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.399 22:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.399 [2024-11-26 22:56:16.141271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:11:37.399 [2024-11-26 22:56:16.150480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:11:37.399 22:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.399 22:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:37.399 [2024-11-26 22:56:16.152770] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.337 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.337 "name": "raid_bdev1", 00:11:38.337 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:38.337 "strip_size_kb": 0, 00:11:38.337 "state": "online", 00:11:38.337 "raid_level": "raid1", 00:11:38.337 "superblock": true, 00:11:38.337 "num_base_bdevs": 2, 00:11:38.337 
"num_base_bdevs_discovered": 2, 00:11:38.337 "num_base_bdevs_operational": 2, 00:11:38.337 "process": { 00:11:38.337 "type": "rebuild", 00:11:38.337 "target": "spare", 00:11:38.337 "progress": { 00:11:38.338 "blocks": 20480, 00:11:38.338 "percent": 32 00:11:38.338 } 00:11:38.338 }, 00:11:38.338 "base_bdevs_list": [ 00:11:38.338 { 00:11:38.338 "name": "spare", 00:11:38.338 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:38.338 "is_configured": true, 00:11:38.338 "data_offset": 2048, 00:11:38.338 "data_size": 63488 00:11:38.338 }, 00:11:38.338 { 00:11:38.338 "name": "BaseBdev2", 00:11:38.338 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:38.338 "is_configured": true, 00:11:38.338 "data_offset": 2048, 00:11:38.338 "data_size": 63488 00:11:38.338 } 00:11:38.338 ] 00:11:38.338 }' 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.338 [2024-11-26 22:56:17.294797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.338 [2024-11-26 22:56:17.363564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:38.338 [2024-11-26 22:56:17.363646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.338 [2024-11-26 22:56:17.363663] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.338 [2024-11-26 22:56:17.363675] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.338 22:56:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.338 "name": "raid_bdev1", 00:11:38.338 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:38.338 "strip_size_kb": 0, 00:11:38.338 "state": "online", 00:11:38.338 "raid_level": "raid1", 00:11:38.338 "superblock": true, 00:11:38.338 "num_base_bdevs": 2, 00:11:38.338 "num_base_bdevs_discovered": 1, 00:11:38.338 "num_base_bdevs_operational": 1, 00:11:38.338 "base_bdevs_list": [ 00:11:38.338 { 00:11:38.338 "name": null, 00:11:38.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.338 "is_configured": false, 00:11:38.338 "data_offset": 0, 00:11:38.338 "data_size": 63488 00:11:38.338 }, 00:11:38.338 { 00:11:38.338 "name": "BaseBdev2", 00:11:38.338 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:38.338 "is_configured": true, 00:11:38.338 "data_offset": 2048, 00:11:38.338 "data_size": 63488 00:11:38.338 } 00:11:38.338 ] 00:11:38.338 }' 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.338 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.907 "name": "raid_bdev1", 00:11:38.907 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:38.907 "strip_size_kb": 0, 00:11:38.907 "state": "online", 00:11:38.907 "raid_level": "raid1", 00:11:38.907 "superblock": true, 00:11:38.907 "num_base_bdevs": 2, 00:11:38.907 "num_base_bdevs_discovered": 1, 00:11:38.907 "num_base_bdevs_operational": 1, 00:11:38.907 "base_bdevs_list": [ 00:11:38.907 { 00:11:38.907 "name": null, 00:11:38.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.907 "is_configured": false, 00:11:38.907 "data_offset": 0, 00:11:38.907 "data_size": 63488 00:11:38.907 }, 00:11:38.907 { 00:11:38.907 "name": "BaseBdev2", 00:11:38.907 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:38.907 "is_configured": true, 00:11:38.907 "data_offset": 2048, 00:11:38.907 "data_size": 63488 00:11:38.907 } 00:11:38.907 ] 00:11:38.907 }' 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.907 22:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.907 22:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.907 22:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.907 22:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.907 22:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:38.907 [2024-11-26 22:56:18.028231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:39.166 [2024-11-26 22:56:18.036907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840 00:11:39.166 22:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.166 22:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:39.166 [2024-11-26 22:56:18.039173] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.104 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.105 "name": "raid_bdev1", 00:11:40.105 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:40.105 "strip_size_kb": 0, 00:11:40.105 "state": "online", 00:11:40.105 "raid_level": "raid1", 
00:11:40.105 "superblock": true, 00:11:40.105 "num_base_bdevs": 2, 00:11:40.105 "num_base_bdevs_discovered": 2, 00:11:40.105 "num_base_bdevs_operational": 2, 00:11:40.105 "process": { 00:11:40.105 "type": "rebuild", 00:11:40.105 "target": "spare", 00:11:40.105 "progress": { 00:11:40.105 "blocks": 20480, 00:11:40.105 "percent": 32 00:11:40.105 } 00:11:40.105 }, 00:11:40.105 "base_bdevs_list": [ 00:11:40.105 { 00:11:40.105 "name": "spare", 00:11:40.105 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:40.105 "is_configured": true, 00:11:40.105 "data_offset": 2048, 00:11:40.105 "data_size": 63488 00:11:40.105 }, 00:11:40.105 { 00:11:40.105 "name": "BaseBdev2", 00:11:40.105 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:40.105 "is_configured": true, 00:11:40.105 "data_offset": 2048, 00:11:40.105 "data_size": 63488 00:11:40.105 } 00:11:40.105 ] 00:11:40.105 }' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:40.105 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:40.105 22:56:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=308 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.105 22:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.363 "name": "raid_bdev1", 00:11:40.363 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:40.363 "strip_size_kb": 0, 00:11:40.363 "state": "online", 00:11:40.363 "raid_level": "raid1", 00:11:40.363 "superblock": true, 00:11:40.363 "num_base_bdevs": 2, 00:11:40.363 "num_base_bdevs_discovered": 2, 00:11:40.363 "num_base_bdevs_operational": 2, 00:11:40.363 "process": { 00:11:40.363 "type": "rebuild", 00:11:40.363 "target": "spare", 00:11:40.363 "progress": { 00:11:40.363 "blocks": 22528, 00:11:40.363 "percent": 35 00:11:40.363 } 00:11:40.363 }, 00:11:40.363 "base_bdevs_list": [ 
00:11:40.363 { 00:11:40.363 "name": "spare", 00:11:40.363 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:40.363 "is_configured": true, 00:11:40.363 "data_offset": 2048, 00:11:40.363 "data_size": 63488 00:11:40.363 }, 00:11:40.363 { 00:11:40.363 "name": "BaseBdev2", 00:11:40.363 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:40.363 "is_configured": true, 00:11:40.363 "data_offset": 2048, 00:11:40.363 "data_size": 63488 00:11:40.363 } 00:11:40.363 ] 00:11:40.363 }' 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.363 22:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.304 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.304 "name": "raid_bdev1", 00:11:41.304 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:41.304 "strip_size_kb": 0, 00:11:41.304 "state": "online", 00:11:41.304 "raid_level": "raid1", 00:11:41.304 "superblock": true, 00:11:41.304 "num_base_bdevs": 2, 00:11:41.304 "num_base_bdevs_discovered": 2, 00:11:41.304 "num_base_bdevs_operational": 2, 00:11:41.304 "process": { 00:11:41.304 "type": "rebuild", 00:11:41.304 "target": "spare", 00:11:41.304 "progress": { 00:11:41.304 "blocks": 45056, 00:11:41.304 "percent": 70 00:11:41.304 } 00:11:41.304 }, 00:11:41.304 "base_bdevs_list": [ 00:11:41.304 { 00:11:41.304 "name": "spare", 00:11:41.304 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:41.304 "is_configured": true, 00:11:41.305 "data_offset": 2048, 00:11:41.305 "data_size": 63488 00:11:41.305 }, 00:11:41.305 { 00:11:41.305 "name": "BaseBdev2", 00:11:41.305 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:41.305 "is_configured": true, 00:11:41.305 "data_offset": 2048, 00:11:41.305 "data_size": 63488 00:11:41.305 } 00:11:41.305 ] 00:11:41.305 }' 00:11:41.305 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.305 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.305 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.564 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.564 22:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:42.136 [2024-11-26 
22:56:21.165095] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:42.136 [2024-11-26 22:56:21.165170] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:42.136 [2024-11-26 22:56:21.165318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.396 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.657 "name": "raid_bdev1", 00:11:42.657 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:42.657 "strip_size_kb": 0, 00:11:42.657 "state": "online", 00:11:42.657 "raid_level": "raid1", 00:11:42.657 "superblock": true, 00:11:42.657 "num_base_bdevs": 2, 00:11:42.657 "num_base_bdevs_discovered": 2, 00:11:42.657 
"num_base_bdevs_operational": 2, 00:11:42.657 "base_bdevs_list": [ 00:11:42.657 { 00:11:42.657 "name": "spare", 00:11:42.657 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:42.657 "is_configured": true, 00:11:42.657 "data_offset": 2048, 00:11:42.657 "data_size": 63488 00:11:42.657 }, 00:11:42.657 { 00:11:42.657 "name": "BaseBdev2", 00:11:42.657 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:42.657 "is_configured": true, 00:11:42.657 "data_offset": 2048, 00:11:42.657 "data_size": 63488 00:11:42.657 } 00:11:42.657 ] 00:11:42.657 }' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.657 "name": "raid_bdev1", 00:11:42.657 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:42.657 "strip_size_kb": 0, 00:11:42.657 "state": "online", 00:11:42.657 "raid_level": "raid1", 00:11:42.657 "superblock": true, 00:11:42.657 "num_base_bdevs": 2, 00:11:42.657 "num_base_bdevs_discovered": 2, 00:11:42.657 "num_base_bdevs_operational": 2, 00:11:42.657 "base_bdevs_list": [ 00:11:42.657 { 00:11:42.657 "name": "spare", 00:11:42.657 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:42.657 "is_configured": true, 00:11:42.657 "data_offset": 2048, 00:11:42.657 "data_size": 63488 00:11:42.657 }, 00:11:42.657 { 00:11:42.657 "name": "BaseBdev2", 00:11:42.657 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:42.657 "is_configured": true, 00:11:42.657 "data_offset": 2048, 00:11:42.657 "data_size": 63488 00:11:42.657 } 00:11:42.657 ] 00:11:42.657 }' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.657 22:56:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.657 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.917 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.917 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.917 "name": "raid_bdev1", 00:11:42.917 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:42.917 "strip_size_kb": 0, 00:11:42.917 "state": "online", 00:11:42.917 "raid_level": "raid1", 00:11:42.917 "superblock": true, 00:11:42.918 "num_base_bdevs": 2, 00:11:42.918 "num_base_bdevs_discovered": 2, 00:11:42.918 "num_base_bdevs_operational": 2, 00:11:42.918 "base_bdevs_list": [ 00:11:42.918 { 00:11:42.918 "name": "spare", 00:11:42.918 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:42.918 "is_configured": true, 00:11:42.918 "data_offset": 2048, 00:11:42.918 "data_size": 63488 00:11:42.918 }, 00:11:42.918 { 
00:11:42.918 "name": "BaseBdev2", 00:11:42.918 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:42.918 "is_configured": true, 00:11:42.918 "data_offset": 2048, 00:11:42.918 "data_size": 63488 00:11:42.918 } 00:11:42.918 ] 00:11:42.918 }' 00:11:42.918 22:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.918 22:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.178 [2024-11-26 22:56:22.225463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.178 [2024-11-26 22:56:22.225553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.178 [2024-11-26 22:56:22.225713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.178 [2024-11-26 22:56:22.225826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.178 [2024-11-26 22:56:22.225889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.178 22:56:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.178 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:43.439 /dev/nbd0 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.439 1+0 records in 00:11:43.439 1+0 records out 00:11:43.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423628 s, 9.7 MB/s 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.439 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:43.699 /dev/nbd1 00:11:43.699 22:56:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.699 1+0 records in 00:11:43.699 1+0 records out 00:11:43.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458057 s, 8.9 MB/s 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:43.699 22:56:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.699 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.960 22:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.220 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.221 [2024-11-26 22:56:23.306734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:44.221 [2024-11-26 22:56:23.306807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.221 [2024-11-26 22:56:23.306857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.221 [2024-11-26 22:56:23.306870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.221 [2024-11-26 22:56:23.309511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.221 [2024-11-26 22:56:23.309605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:44.221 [2024-11-26 22:56:23.309727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:44.221 [2024-11-26 22:56:23.309787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:44.221 [2024-11-26 22:56:23.309939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.221 spare 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.221 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.480 [2024-11-26 22:56:23.410032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:44.480 [2024-11-26 22:56:23.410113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.480 [2024-11-26 22:56:23.410490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:11:44.480 [2024-11-26 22:56:23.410691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:44.480 [2024-11-26 22:56:23.410703] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:44.480 [2024-11-26 22:56:23.410869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.480 
22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.480 "name": "raid_bdev1", 00:11:44.480 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:44.480 "strip_size_kb": 0, 00:11:44.480 "state": "online", 00:11:44.480 "raid_level": "raid1", 00:11:44.480 "superblock": true, 00:11:44.480 "num_base_bdevs": 2, 00:11:44.480 "num_base_bdevs_discovered": 2, 00:11:44.480 "num_base_bdevs_operational": 2, 00:11:44.480 "base_bdevs_list": [ 00:11:44.480 { 00:11:44.480 "name": "spare", 00:11:44.480 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:44.480 "is_configured": true, 00:11:44.480 "data_offset": 2048, 00:11:44.480 "data_size": 63488 00:11:44.480 }, 00:11:44.480 { 00:11:44.480 "name": "BaseBdev2", 00:11:44.480 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:44.480 "is_configured": true, 00:11:44.480 "data_offset": 2048, 00:11:44.480 "data_size": 63488 00:11:44.480 } 00:11:44.480 ] 00:11:44.480 }' 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.480 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.740 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.740 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.740 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.740 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.740 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.000 "name": "raid_bdev1", 00:11:45.000 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:45.000 "strip_size_kb": 0, 00:11:45.000 "state": "online", 00:11:45.000 "raid_level": "raid1", 00:11:45.000 "superblock": true, 00:11:45.000 "num_base_bdevs": 2, 00:11:45.000 "num_base_bdevs_discovered": 2, 00:11:45.000 "num_base_bdevs_operational": 2, 00:11:45.000 "base_bdevs_list": [ 00:11:45.000 { 00:11:45.000 "name": "spare", 00:11:45.000 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:45.000 "is_configured": true, 00:11:45.000 "data_offset": 2048, 00:11:45.000 "data_size": 63488 00:11:45.000 }, 00:11:45.000 { 00:11:45.000 "name": "BaseBdev2", 00:11:45.000 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:45.000 "is_configured": true, 00:11:45.000 "data_offset": 2048, 00:11:45.000 "data_size": 63488 00:11:45.000 } 00:11:45.000 ] 00:11:45.000 }' 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.000 22:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 [2024-11-26 22:56:24.055064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:45.000 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.001 "name": "raid_bdev1", 00:11:45.001 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:45.001 "strip_size_kb": 0, 00:11:45.001 "state": "online", 00:11:45.001 "raid_level": "raid1", 00:11:45.001 "superblock": true, 00:11:45.001 "num_base_bdevs": 2, 00:11:45.001 "num_base_bdevs_discovered": 1, 00:11:45.001 "num_base_bdevs_operational": 1, 00:11:45.001 "base_bdevs_list": [ 00:11:45.001 { 00:11:45.001 "name": null, 00:11:45.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.001 "is_configured": false, 00:11:45.001 "data_offset": 0, 00:11:45.001 "data_size": 63488 00:11:45.001 }, 00:11:45.001 { 00:11:45.001 "name": "BaseBdev2", 00:11:45.001 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:45.001 "is_configured": true, 00:11:45.001 "data_offset": 2048, 00:11:45.001 "data_size": 63488 00:11:45.001 } 00:11:45.001 ] 00:11:45.001 }' 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.001 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.571 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:45.571 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.571 22:56:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.571 [2024-11-26 22:56:24.519220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:45.571 [2024-11-26 22:56:24.519575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:45.571 [2024-11-26 22:56:24.519656] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:45.571 [2024-11-26 22:56:24.519762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:45.571 [2024-11-26 22:56:24.528771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:11:45.571 22:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.571 22:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:45.571 [2024-11-26 22:56:24.531095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.554 "name": "raid_bdev1", 00:11:46.554 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:46.554 "strip_size_kb": 0, 00:11:46.554 "state": "online", 00:11:46.554 "raid_level": "raid1", 00:11:46.554 "superblock": true, 00:11:46.554 "num_base_bdevs": 2, 00:11:46.554 "num_base_bdevs_discovered": 2, 00:11:46.554 "num_base_bdevs_operational": 2, 00:11:46.554 "process": { 00:11:46.554 "type": "rebuild", 00:11:46.554 "target": "spare", 00:11:46.554 "progress": { 00:11:46.554 "blocks": 20480, 00:11:46.554 "percent": 32 00:11:46.554 } 00:11:46.554 }, 00:11:46.554 "base_bdevs_list": [ 00:11:46.554 { 00:11:46.554 "name": "spare", 00:11:46.554 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:46.554 "is_configured": true, 00:11:46.554 "data_offset": 2048, 00:11:46.554 "data_size": 63488 00:11:46.554 }, 00:11:46.554 { 00:11:46.554 "name": "BaseBdev2", 00:11:46.554 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:46.554 "is_configured": true, 00:11:46.554 "data_offset": 2048, 00:11:46.554 "data_size": 63488 00:11:46.554 } 00:11:46.554 ] 00:11:46.554 }' 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.554 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:46.814 22:56:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.814 [2024-11-26 22:56:25.689636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:46.814 [2024-11-26 22:56:25.741281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:46.814 [2024-11-26 22:56:25.741347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.814 [2024-11-26 22:56:25.741366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:46.814 [2024-11-26 22:56:25.741379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.814 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.814 "name": "raid_bdev1", 00:11:46.814 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:46.814 "strip_size_kb": 0, 00:11:46.814 "state": "online", 00:11:46.814 "raid_level": "raid1", 00:11:46.814 "superblock": true, 00:11:46.814 "num_base_bdevs": 2, 00:11:46.814 "num_base_bdevs_discovered": 1, 00:11:46.814 "num_base_bdevs_operational": 1, 00:11:46.814 "base_bdevs_list": [ 00:11:46.814 { 00:11:46.814 "name": null, 00:11:46.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.814 "is_configured": false, 00:11:46.814 "data_offset": 0, 00:11:46.814 "data_size": 63488 00:11:46.815 }, 00:11:46.815 { 00:11:46.815 "name": "BaseBdev2", 00:11:46.815 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:46.815 "is_configured": true, 00:11:46.815 "data_offset": 2048, 00:11:46.815 "data_size": 63488 00:11:46.815 } 00:11:46.815 ] 00:11:46.815 }' 00:11:46.815 22:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.815 22:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.075 22:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:47.075 22:56:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:47.075 22:56:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.075 [2024-11-26 22:56:26.190184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:47.075 [2024-11-26 22:56:26.190352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.075 [2024-11-26 22:56:26.190405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.075 [2024-11-26 22:56:26.190444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.075 [2024-11-26 22:56:26.191034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.075 [2024-11-26 22:56:26.191115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:47.075 [2024-11-26 22:56:26.191289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:47.075 [2024-11-26 22:56:26.191347] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:47.075 [2024-11-26 22:56:26.191400] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:47.075 [2024-11-26 22:56:26.191458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:47.075 [2024-11-26 22:56:26.200373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:11:47.075 spare 00:11:47.335 22:56:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.335 22:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:47.335 [2024-11-26 22:56:26.202659] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.274 "name": "raid_bdev1", 00:11:48.274 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:48.274 "strip_size_kb": 0, 00:11:48.274 "state": "online", 00:11:48.274 
"raid_level": "raid1", 00:11:48.274 "superblock": true, 00:11:48.274 "num_base_bdevs": 2, 00:11:48.274 "num_base_bdevs_discovered": 2, 00:11:48.274 "num_base_bdevs_operational": 2, 00:11:48.274 "process": { 00:11:48.274 "type": "rebuild", 00:11:48.274 "target": "spare", 00:11:48.274 "progress": { 00:11:48.274 "blocks": 20480, 00:11:48.274 "percent": 32 00:11:48.274 } 00:11:48.274 }, 00:11:48.274 "base_bdevs_list": [ 00:11:48.274 { 00:11:48.274 "name": "spare", 00:11:48.274 "uuid": "a0f4526d-0f95-52d6-b0dd-bcf90cde2bd7", 00:11:48.274 "is_configured": true, 00:11:48.274 "data_offset": 2048, 00:11:48.274 "data_size": 63488 00:11:48.274 }, 00:11:48.274 { 00:11:48.274 "name": "BaseBdev2", 00:11:48.274 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:48.274 "is_configured": true, 00:11:48.274 "data_offset": 2048, 00:11:48.274 "data_size": 63488 00:11:48.274 } 00:11:48.274 ] 00:11:48.274 }' 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.274 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 [2024-11-26 22:56:27.373045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.534 [2024-11-26 22:56:27.412677] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:48.534 [2024-11-26 22:56:27.412743] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.534 [2024-11-26 22:56:27.412764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.534 [2024-11-26 22:56:27.412773] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.534 22:56:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.534 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.534 "name": "raid_bdev1", 00:11:48.534 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:48.534 "strip_size_kb": 0, 00:11:48.534 "state": "online", 00:11:48.534 "raid_level": "raid1", 00:11:48.534 "superblock": true, 00:11:48.534 "num_base_bdevs": 2, 00:11:48.534 "num_base_bdevs_discovered": 1, 00:11:48.534 "num_base_bdevs_operational": 1, 00:11:48.534 "base_bdevs_list": [ 00:11:48.534 { 00:11:48.534 "name": null, 00:11:48.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.534 "is_configured": false, 00:11:48.534 "data_offset": 0, 00:11:48.534 "data_size": 63488 00:11:48.534 }, 00:11:48.534 { 00:11:48.534 "name": "BaseBdev2", 00:11:48.534 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:48.535 "is_configured": true, 00:11:48.535 "data_offset": 2048, 00:11:48.535 "data_size": 63488 00:11:48.535 } 00:11:48.535 ] 00:11:48.535 }' 00:11:48.535 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.535 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.795 "name": "raid_bdev1", 00:11:48.795 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:48.795 "strip_size_kb": 0, 00:11:48.795 "state": "online", 00:11:48.795 "raid_level": "raid1", 00:11:48.795 "superblock": true, 00:11:48.795 "num_base_bdevs": 2, 00:11:48.795 "num_base_bdevs_discovered": 1, 00:11:48.795 "num_base_bdevs_operational": 1, 00:11:48.795 "base_bdevs_list": [ 00:11:48.795 { 00:11:48.795 "name": null, 00:11:48.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.795 "is_configured": false, 00:11:48.795 "data_offset": 0, 00:11:48.795 "data_size": 63488 00:11:48.795 }, 00:11:48.795 { 00:11:48.795 "name": "BaseBdev2", 00:11:48.795 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:48.795 "is_configured": true, 00:11:48.795 "data_offset": 2048, 00:11:48.795 "data_size": 63488 00:11:48.795 } 00:11:48.795 ] 00:11:48.795 }' 00:11:48.795 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.054 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:49.054 22:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.054 [2024-11-26 22:56:28.029259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:49.054 [2024-11-26 22:56:28.029334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.054 [2024-11-26 22:56:28.029363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:49.054 [2024-11-26 22:56:28.029375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.054 [2024-11-26 22:56:28.029884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.054 [2024-11-26 22:56:28.029904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:49.054 [2024-11-26 22:56:28.029999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:49.054 [2024-11-26 22:56:28.030015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:49.054 [2024-11-26 22:56:28.030027] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:49.054 [2024-11-26 22:56:28.030043] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:49.054 BaseBdev1 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:49.054 22:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.992 "name": "raid_bdev1", 00:11:49.992 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:49.992 "strip_size_kb": 0, 
00:11:49.992 "state": "online", 00:11:49.992 "raid_level": "raid1", 00:11:49.992 "superblock": true, 00:11:49.992 "num_base_bdevs": 2, 00:11:49.992 "num_base_bdevs_discovered": 1, 00:11:49.992 "num_base_bdevs_operational": 1, 00:11:49.992 "base_bdevs_list": [ 00:11:49.992 { 00:11:49.992 "name": null, 00:11:49.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.992 "is_configured": false, 00:11:49.992 "data_offset": 0, 00:11:49.992 "data_size": 63488 00:11:49.992 }, 00:11:49.992 { 00:11:49.992 "name": "BaseBdev2", 00:11:49.992 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:49.992 "is_configured": true, 00:11:49.992 "data_offset": 2048, 00:11:49.992 "data_size": 63488 00:11:49.992 } 00:11:49.992 ] 00:11:49.992 }' 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.992 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.567 "name": "raid_bdev1", 00:11:50.567 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:50.567 "strip_size_kb": 0, 00:11:50.567 "state": "online", 00:11:50.567 "raid_level": "raid1", 00:11:50.567 "superblock": true, 00:11:50.567 "num_base_bdevs": 2, 00:11:50.567 "num_base_bdevs_discovered": 1, 00:11:50.567 "num_base_bdevs_operational": 1, 00:11:50.567 "base_bdevs_list": [ 00:11:50.567 { 00:11:50.567 "name": null, 00:11:50.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.567 "is_configured": false, 00:11:50.567 "data_offset": 0, 00:11:50.567 "data_size": 63488 00:11:50.567 }, 00:11:50.567 { 00:11:50.567 "name": "BaseBdev2", 00:11:50.567 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:50.567 "is_configured": true, 00:11:50.567 "data_offset": 2048, 00:11:50.567 "data_size": 63488 00:11:50.567 } 00:11:50.567 ] 00:11:50.567 }' 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:50.567 22:56:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.567 [2024-11-26 22:56:29.569765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.567 [2024-11-26 22:56:29.570049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:50.567 [2024-11-26 22:56:29.570083] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:50.567 request: 00:11:50.567 { 00:11:50.567 "base_bdev": "BaseBdev1", 00:11:50.567 "raid_bdev": "raid_bdev1", 00:11:50.567 "method": "bdev_raid_add_base_bdev", 00:11:50.567 "req_id": 1 00:11:50.567 } 00:11:50.567 Got JSON-RPC error response 00:11:50.567 response: 00:11:50.567 { 00:11:50.567 "code": -22, 00:11:50.567 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:50.567 } 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.567 22:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.506 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.767 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.767 "name": "raid_bdev1", 00:11:51.767 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 
00:11:51.767 "strip_size_kb": 0, 00:11:51.767 "state": "online", 00:11:51.767 "raid_level": "raid1", 00:11:51.767 "superblock": true, 00:11:51.767 "num_base_bdevs": 2, 00:11:51.767 "num_base_bdevs_discovered": 1, 00:11:51.767 "num_base_bdevs_operational": 1, 00:11:51.767 "base_bdevs_list": [ 00:11:51.767 { 00:11:51.767 "name": null, 00:11:51.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.767 "is_configured": false, 00:11:51.767 "data_offset": 0, 00:11:51.767 "data_size": 63488 00:11:51.767 }, 00:11:51.767 { 00:11:51.767 "name": "BaseBdev2", 00:11:51.767 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:51.767 "is_configured": true, 00:11:51.767 "data_offset": 2048, 00:11:51.767 "data_size": 63488 00:11:51.767 } 00:11:51.767 ] 00:11:51.767 }' 00:11:51.767 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.767 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.027 22:56:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.027 22:56:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.027 "name": "raid_bdev1", 00:11:52.027 "uuid": "2a0c4f57-1b98-4bef-8170-1bcb30f0fd9b", 00:11:52.027 "strip_size_kb": 0, 00:11:52.027 "state": "online", 00:11:52.027 "raid_level": "raid1", 00:11:52.027 "superblock": true, 00:11:52.027 "num_base_bdevs": 2, 00:11:52.027 "num_base_bdevs_discovered": 1, 00:11:52.027 "num_base_bdevs_operational": 1, 00:11:52.027 "base_bdevs_list": [ 00:11:52.027 { 00:11:52.027 "name": null, 00:11:52.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.027 "is_configured": false, 00:11:52.027 "data_offset": 0, 00:11:52.027 "data_size": 63488 00:11:52.027 }, 00:11:52.027 { 00:11:52.027 "name": "BaseBdev2", 00:11:52.027 "uuid": "9502c384-3f9a-592d-ba4e-b995ec21c8be", 00:11:52.027 "is_configured": true, 00:11:52.027 "data_offset": 2048, 00:11:52.027 "data_size": 63488 00:11:52.027 } 00:11:52.027 ] 00:11:52.027 }' 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88045 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88045 ']' 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88045 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:52.027 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88045 00:11:52.288 killing process with pid 88045 00:11:52.288 Received shutdown signal, test time was about 60.000000 seconds 00:11:52.288 00:11:52.288 Latency(us) 00:11:52.288 [2024-11-26T22:56:31.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.288 [2024-11-26T22:56:31.416Z] =================================================================================================================== 00:11:52.288 [2024-11-26T22:56:31.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:52.288 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.288 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.288 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88045' 00:11:52.288 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88045 00:11:52.288 [2024-11-26 22:56:31.154041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.288 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88045 00:11:52.288 [2024-11-26 22:56:31.154274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.288 [2024-11-26 22:56:31.154341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.288 [2024-11-26 22:56:31.154356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:52.288 [2024-11-26 22:56:31.214655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:52.549 00:11:52.549 real 0m22.379s 
00:11:52.549 user 0m27.019s 00:11:52.549 sys 0m4.050s 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.549 ************************************ 00:11:52.549 END TEST raid_rebuild_test_sb 00:11:52.549 ************************************ 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.549 22:56:31 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:52.549 22:56:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:52.549 22:56:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.549 22:56:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.549 ************************************ 00:11:52.549 START TEST raid_rebuild_test_io 00:11:52.549 ************************************ 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:52.549 
22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88771 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88771 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 88771 ']' 00:11:52.549 22:56:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.549 22:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.809 [2024-11-26 22:56:31.728803] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:52.809 [2024-11-26 22:56:31.729048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:52.809 Zero copy mechanism will not be used. 00:11:52.809 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88771 ] 00:11:52.809 [2024-11-26 22:56:31.869175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:52.809 [2024-11-26 22:56:31.908686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.068 [2024-11-26 22:56:31.947884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.068 [2024-11-26 22:56:32.024340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.068 [2024-11-26 22:56:32.024497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 BaseBdev1_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 [2024-11-26 22:56:32.574807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:53.638 [2024-11-26 22:56:32.574886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.638 [2024-11-26 22:56:32.574914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:53.638 [2024-11-26 
22:56:32.574933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.638 [2024-11-26 22:56:32.577539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.638 [2024-11-26 22:56:32.577581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.638 BaseBdev1 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 BaseBdev2_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 [2024-11-26 22:56:32.609515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:53.638 [2024-11-26 22:56:32.609594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.638 [2024-11-26 22:56:32.609617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:53.638 [2024-11-26 22:56:32.609630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.638 [2024-11-26 22:56:32.612094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:53.638 [2024-11-26 22:56:32.612139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.638 BaseBdev2 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 spare_malloc 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.638 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 spare_delay 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.639 [2024-11-26 22:56:32.656295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:53.639 [2024-11-26 22:56:32.656358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.639 [2024-11-26 22:56:32.656381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:53.639 [2024-11-26 22:56:32.656396] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.639 [2024-11-26 22:56:32.658881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.639 [2024-11-26 22:56:32.658927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:53.639 spare 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.639 [2024-11-26 22:56:32.668375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.639 [2024-11-26 22:56:32.670585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.639 [2024-11-26 22:56:32.670684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:53.639 [2024-11-26 22:56:32.670707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:53.639 [2024-11-26 22:56:32.671020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:53.639 [2024-11-26 22:56:32.671177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:53.639 [2024-11-26 22:56:32.671189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:53.639 [2024-11-26 22:56:32.671355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.639 "name": "raid_bdev1", 00:11:53.639 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:53.639 "strip_size_kb": 0, 00:11:53.639 "state": "online", 00:11:53.639 "raid_level": "raid1", 00:11:53.639 "superblock": false, 00:11:53.639 "num_base_bdevs": 2, 00:11:53.639 
"num_base_bdevs_discovered": 2, 00:11:53.639 "num_base_bdevs_operational": 2, 00:11:53.639 "base_bdevs_list": [ 00:11:53.639 { 00:11:53.639 "name": "BaseBdev1", 00:11:53.639 "uuid": "d1c2e751-2ce4-55e7-bf9a-c613c094c636", 00:11:53.639 "is_configured": true, 00:11:53.639 "data_offset": 0, 00:11:53.639 "data_size": 65536 00:11:53.639 }, 00:11:53.639 { 00:11:53.639 "name": "BaseBdev2", 00:11:53.639 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:53.639 "is_configured": true, 00:11:53.639 "data_offset": 0, 00:11:53.639 "data_size": 65536 00:11:53.639 } 00:11:53.639 ] 00:11:53.639 }' 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.639 22:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:54.209 [2024-11-26 22:56:33.108733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 [2024-11-26 22:56:33.208487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.209 "name": "raid_bdev1", 00:11:54.209 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:54.209 "strip_size_kb": 0, 00:11:54.209 "state": "online", 00:11:54.209 "raid_level": "raid1", 00:11:54.209 "superblock": false, 00:11:54.209 "num_base_bdevs": 2, 00:11:54.209 "num_base_bdevs_discovered": 1, 00:11:54.209 "num_base_bdevs_operational": 1, 00:11:54.209 "base_bdevs_list": [ 00:11:54.209 { 00:11:54.209 "name": null, 00:11:54.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.209 "is_configured": false, 00:11:54.209 "data_offset": 0, 00:11:54.209 "data_size": 65536 00:11:54.209 }, 00:11:54.209 { 00:11:54.209 "name": "BaseBdev2", 00:11:54.209 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:54.209 "is_configured": true, 00:11:54.209 "data_offset": 0, 00:11:54.209 "data_size": 65536 00:11:54.209 } 00:11:54.209 ] 00:11:54.209 }' 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.209 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.209 [2024-11-26 22:56:33.284801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:54.209 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:11:54.209 Zero copy mechanism will not be used. 00:11:54.209 Running I/O for 60 seconds... 00:11:54.779 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:54.779 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.779 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.779 [2024-11-26 22:56:33.671929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.779 22:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.779 22:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:54.779 [2024-11-26 22:56:33.714912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:54.779 [2024-11-26 22:56:33.717224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:54.779 [2024-11-26 22:56:33.819396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:54.779 [2024-11-26 22:56:33.820321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:55.039 [2024-11-26 22:56:34.028419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:55.039 [2024-11-26 22:56:34.028771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:55.298 174.00 IOPS, 522.00 MiB/s [2024-11-26T22:56:34.427Z] [2024-11-26 22:56:34.351491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:55.299 [2024-11-26 22:56:34.351958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:55.558 [2024-11-26 22:56:34.467964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:55.819 [2024-11-26 22:56:34.697615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.819 "name": "raid_bdev1", 00:11:55.819 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:55.819 "strip_size_kb": 0, 00:11:55.819 "state": "online", 00:11:55.819 "raid_level": "raid1", 00:11:55.819 "superblock": false, 00:11:55.819 "num_base_bdevs": 2, 00:11:55.819 "num_base_bdevs_discovered": 2, 00:11:55.819 "num_base_bdevs_operational": 2, 00:11:55.819 "process": { 00:11:55.819 
"type": "rebuild", 00:11:55.819 "target": "spare", 00:11:55.819 "progress": { 00:11:55.819 "blocks": 14336, 00:11:55.819 "percent": 21 00:11:55.819 } 00:11:55.819 }, 00:11:55.819 "base_bdevs_list": [ 00:11:55.819 { 00:11:55.819 "name": "spare", 00:11:55.819 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:11:55.819 "is_configured": true, 00:11:55.819 "data_offset": 0, 00:11:55.819 "data_size": 65536 00:11:55.819 }, 00:11:55.819 { 00:11:55.819 "name": "BaseBdev2", 00:11:55.819 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:55.819 "is_configured": true, 00:11:55.819 "data_offset": 0, 00:11:55.819 "data_size": 65536 00:11:55.819 } 00:11:55.819 ] 00:11:55.819 }' 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.819 22:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.819 [2024-11-26 22:56:34.862045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.819 [2024-11-26 22:56:34.921720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:56.079 [2024-11-26 22:56:35.026820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:56.079 [2024-11-26 22:56:35.029898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:56.079 [2024-11-26 22:56:35.030012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.079 [2024-11-26 22:56:35.030031] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:56.079 [2024-11-26 22:56:35.057002] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.080 "name": "raid_bdev1", 00:11:56.080 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:56.080 "strip_size_kb": 0, 00:11:56.080 "state": "online", 00:11:56.080 "raid_level": "raid1", 00:11:56.080 "superblock": false, 00:11:56.080 "num_base_bdevs": 2, 00:11:56.080 "num_base_bdevs_discovered": 1, 00:11:56.080 "num_base_bdevs_operational": 1, 00:11:56.080 "base_bdevs_list": [ 00:11:56.080 { 00:11:56.080 "name": null, 00:11:56.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.080 "is_configured": false, 00:11:56.080 "data_offset": 0, 00:11:56.080 "data_size": 65536 00:11:56.080 }, 00:11:56.080 { 00:11:56.080 "name": "BaseBdev2", 00:11:56.080 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:56.080 "is_configured": true, 00:11:56.080 "data_offset": 0, 00:11:56.080 "data_size": 65536 00:11:56.080 } 00:11:56.080 ] 00:11:56.080 }' 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.080 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 150.00 IOPS, 450.00 MiB/s [2024-11-26T22:56:35.727Z] 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.599 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.599 "name": "raid_bdev1", 00:11:56.599 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:56.599 "strip_size_kb": 0, 00:11:56.599 "state": "online", 00:11:56.599 "raid_level": "raid1", 00:11:56.599 "superblock": false, 00:11:56.599 "num_base_bdevs": 2, 00:11:56.599 "num_base_bdevs_discovered": 1, 00:11:56.599 "num_base_bdevs_operational": 1, 00:11:56.599 "base_bdevs_list": [ 00:11:56.599 { 00:11:56.599 "name": null, 00:11:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.599 "is_configured": false, 00:11:56.599 "data_offset": 0, 00:11:56.600 "data_size": 65536 00:11:56.600 }, 00:11:56.600 { 00:11:56.600 "name": "BaseBdev2", 00:11:56.600 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:56.600 "is_configured": true, 00:11:56.600 "data_offset": 0, 00:11:56.600 "data_size": 65536 00:11:56.600 } 00:11:56.600 ] 00:11:56.600 }' 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 [2024-11-26 22:56:35.690877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.600 22:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:56.859 [2024-11-26 22:56:35.744599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:11:56.859 [2024-11-26 22:56:35.746941] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:56.859 [2024-11-26 22:56:35.855119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:56.859 [2024-11-26 22:56:35.855990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:56.859 [2024-11-26 22:56:35.977774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:56.859 [2024-11-26 22:56:35.978174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:57.429 177.00 IOPS, 531.00 MiB/s [2024-11-26T22:56:36.557Z] [2024-11-26 22:56:36.442298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:57.689 [2024-11-26 22:56:36.676243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.689 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.689 "name": "raid_bdev1", 00:11:57.689 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:57.689 "strip_size_kb": 0, 00:11:57.689 "state": "online", 00:11:57.689 "raid_level": "raid1", 00:11:57.689 "superblock": false, 00:11:57.689 "num_base_bdevs": 2, 00:11:57.689 "num_base_bdevs_discovered": 2, 00:11:57.689 "num_base_bdevs_operational": 2, 00:11:57.689 "process": { 00:11:57.689 "type": "rebuild", 00:11:57.689 "target": "spare", 00:11:57.689 "progress": { 00:11:57.689 "blocks": 14336, 00:11:57.689 "percent": 21 00:11:57.689 } 00:11:57.689 }, 00:11:57.689 "base_bdevs_list": [ 00:11:57.689 { 00:11:57.689 "name": "spare", 00:11:57.689 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:11:57.689 "is_configured": true, 00:11:57.689 "data_offset": 0, 00:11:57.689 "data_size": 65536 00:11:57.689 }, 00:11:57.689 { 00:11:57.690 "name": "BaseBdev2", 00:11:57.690 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:57.690 "is_configured": true, 
00:11:57.690 "data_offset": 0, 00:11:57.690 "data_size": 65536 00:11:57.690 } 00:11:57.690 ] 00:11:57.690 }' 00:11:57.690 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=325 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.949 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.949 "name": "raid_bdev1", 00:11:57.949 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:57.949 "strip_size_kb": 0, 00:11:57.950 "state": "online", 00:11:57.950 "raid_level": "raid1", 00:11:57.950 "superblock": false, 00:11:57.950 "num_base_bdevs": 2, 00:11:57.950 "num_base_bdevs_discovered": 2, 00:11:57.950 "num_base_bdevs_operational": 2, 00:11:57.950 "process": { 00:11:57.950 "type": "rebuild", 00:11:57.950 "target": "spare", 00:11:57.950 "progress": { 00:11:57.950 "blocks": 14336, 00:11:57.950 "percent": 21 00:11:57.950 } 00:11:57.950 }, 00:11:57.950 "base_bdevs_list": [ 00:11:57.950 { 00:11:57.950 "name": "spare", 00:11:57.950 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:11:57.950 "is_configured": true, 00:11:57.950 "data_offset": 0, 00:11:57.950 "data_size": 65536 00:11:57.950 }, 00:11:57.950 { 00:11:57.950 "name": "BaseBdev2", 00:11:57.950 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:57.950 "is_configured": true, 00:11:57.950 "data_offset": 0, 00:11:57.950 "data_size": 65536 00:11:57.950 } 00:11:57.950 ] 00:11:57.950 }' 00:11:57.950 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.950 [2024-11-26 22:56:36.893286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:57.950 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.950 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:11:57.950 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.950 22:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:58.469 152.00 IOPS, 456.00 MiB/s [2024-11-26T22:56:37.597Z] [2024-11-26 22:56:37.345457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:58.469 [2024-11-26 22:56:37.345709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:58.729 [2024-11-26 22:56:37.672837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:58.729 [2024-11-26 22:56:37.678666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:58.989 [2024-11-26 22:56:37.907333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.989 22:56:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.989 22:56:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.989 22:56:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.989 "name": "raid_bdev1", 00:11:58.989 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:11:58.989 "strip_size_kb": 0, 00:11:58.989 "state": "online", 00:11:58.989 "raid_level": "raid1", 00:11:58.989 "superblock": false, 00:11:58.989 "num_base_bdevs": 2, 00:11:58.989 "num_base_bdevs_discovered": 2, 00:11:58.989 "num_base_bdevs_operational": 2, 00:11:58.989 "process": { 00:11:58.989 "type": "rebuild", 00:11:58.989 "target": "spare", 00:11:58.989 "progress": { 00:11:58.989 "blocks": 28672, 00:11:58.989 "percent": 43 00:11:58.989 } 00:11:58.989 }, 00:11:58.989 "base_bdevs_list": [ 00:11:58.989 { 00:11:58.989 "name": "spare", 00:11:58.989 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:11:58.989 "is_configured": true, 00:11:58.989 "data_offset": 0, 00:11:58.989 "data_size": 65536 00:11:58.989 }, 00:11:58.989 { 00:11:58.989 "name": "BaseBdev2", 00:11:58.989 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:11:58.989 "is_configured": true, 00:11:58.989 "data_offset": 0, 00:11:58.989 "data_size": 65536 00:11:58.989 } 00:11:58.989 ] 00:11:58.989 }' 00:11:58.989 22:56:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.989 22:56:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.989 22:56:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.249 22:56:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.249 22:56:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.249 [2024-11-26 22:56:38.220731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:59.249 [2024-11-26 22:56:38.221276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:59.509 129.60 IOPS, 388.80 MiB/s [2024-11-26T22:56:38.637Z] [2024-11-26 22:56:38.436142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:59.509 [2024-11-26 22:56:38.436433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:00.079 [2024-11-26 22:56:39.102275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.079 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.080 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.080 22:56:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.080 22:56:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.080 22:56:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.080 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.080 "name": "raid_bdev1", 00:12:00.080 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:12:00.080 "strip_size_kb": 0, 00:12:00.080 "state": "online", 00:12:00.080 "raid_level": "raid1", 00:12:00.080 "superblock": false, 00:12:00.080 "num_base_bdevs": 2, 00:12:00.080 "num_base_bdevs_discovered": 2, 00:12:00.080 "num_base_bdevs_operational": 2, 00:12:00.080 "process": { 00:12:00.080 "type": "rebuild", 00:12:00.080 "target": "spare", 00:12:00.080 "progress": { 00:12:00.080 "blocks": 45056, 00:12:00.080 "percent": 68 00:12:00.080 } 00:12:00.080 }, 00:12:00.080 "base_bdevs_list": [ 00:12:00.080 { 00:12:00.080 "name": "spare", 00:12:00.080 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:12:00.080 "is_configured": true, 00:12:00.080 "data_offset": 0, 00:12:00.080 "data_size": 65536 00:12:00.080 }, 00:12:00.080 { 00:12:00.080 "name": "BaseBdev2", 00:12:00.080 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:12:00.080 "is_configured": true, 00:12:00.080 "data_offset": 0, 00:12:00.080 "data_size": 65536 00:12:00.080 } 00:12:00.080 ] 00:12:00.080 }' 00:12:00.080 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.339 [2024-11-26 22:56:39.216815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:00.339 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.339 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.339 22:56:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.339 22:56:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:00.908 114.83 IOPS, 344.50 MiB/s [2024-11-26T22:56:40.036Z] [2024-11-26 22:56:39.779630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:01.477 103.43 IOPS, 310.29 MiB/s [2024-11-26T22:56:40.605Z] 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.477 [2024-11-26 22:56:40.328403] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.477 "name": "raid_bdev1", 00:12:01.477 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:12:01.477 "strip_size_kb": 0, 00:12:01.477 "state": "online", 00:12:01.477 "raid_level": "raid1", 00:12:01.477 
"superblock": false, 00:12:01.477 "num_base_bdevs": 2, 00:12:01.477 "num_base_bdevs_discovered": 2, 00:12:01.477 "num_base_bdevs_operational": 2, 00:12:01.477 "process": { 00:12:01.477 "type": "rebuild", 00:12:01.477 "target": "spare", 00:12:01.477 "progress": { 00:12:01.477 "blocks": 63488, 00:12:01.477 "percent": 96 00:12:01.477 } 00:12:01.477 }, 00:12:01.477 "base_bdevs_list": [ 00:12:01.477 { 00:12:01.477 "name": "spare", 00:12:01.477 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:12:01.477 "is_configured": true, 00:12:01.477 "data_offset": 0, 00:12:01.477 "data_size": 65536 00:12:01.477 }, 00:12:01.477 { 00:12:01.477 "name": "BaseBdev2", 00:12:01.477 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:12:01.477 "is_configured": true, 00:12:01.477 "data_offset": 0, 00:12:01.477 "data_size": 65536 00:12:01.477 } 00:12:01.477 ] 00:12:01.477 }' 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.477 [2024-11-26 22:56:40.428444] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:01.477 [2024-11-26 22:56:40.431212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.477 22:56:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.417 94.75 IOPS, 284.25 MiB/s [2024-11-26T22:56:41.545Z] 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.417 22:56:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.417 "name": "raid_bdev1", 00:12:02.417 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:12:02.417 "strip_size_kb": 0, 00:12:02.417 "state": "online", 00:12:02.417 "raid_level": "raid1", 00:12:02.417 "superblock": false, 00:12:02.417 "num_base_bdevs": 2, 00:12:02.417 "num_base_bdevs_discovered": 2, 00:12:02.417 "num_base_bdevs_operational": 2, 00:12:02.417 "base_bdevs_list": [ 00:12:02.417 { 00:12:02.417 "name": "spare", 00:12:02.417 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 0, 00:12:02.417 "data_size": 65536 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev2", 00:12:02.417 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 0, 00:12:02.417 "data_size": 65536 00:12:02.417 } 00:12:02.417 ] 00:12:02.417 }' 00:12:02.417 22:56:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.677 "name": "raid_bdev1", 00:12:02.677 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:12:02.677 "strip_size_kb": 0, 00:12:02.677 "state": "online", 00:12:02.677 "raid_level": "raid1", 00:12:02.677 "superblock": false, 00:12:02.677 "num_base_bdevs": 2, 00:12:02.677 "num_base_bdevs_discovered": 
2, 00:12:02.677 "num_base_bdevs_operational": 2, 00:12:02.677 "base_bdevs_list": [ 00:12:02.677 { 00:12:02.677 "name": "spare", 00:12:02.677 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:12:02.677 "is_configured": true, 00:12:02.677 "data_offset": 0, 00:12:02.677 "data_size": 65536 00:12:02.677 }, 00:12:02.677 { 00:12:02.677 "name": "BaseBdev2", 00:12:02.677 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:12:02.677 "is_configured": true, 00:12:02.677 "data_offset": 0, 00:12:02.677 "data_size": 65536 00:12:02.677 } 00:12:02.677 ] 00:12:02.677 }' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.677 "name": "raid_bdev1", 00:12:02.677 "uuid": "d6c5f82e-ec97-4166-88cc-ffcd7c4675b7", 00:12:02.677 "strip_size_kb": 0, 00:12:02.677 "state": "online", 00:12:02.677 "raid_level": "raid1", 00:12:02.677 "superblock": false, 00:12:02.677 "num_base_bdevs": 2, 00:12:02.677 "num_base_bdevs_discovered": 2, 00:12:02.677 "num_base_bdevs_operational": 2, 00:12:02.677 "base_bdevs_list": [ 00:12:02.677 { 00:12:02.677 "name": "spare", 00:12:02.677 "uuid": "ed224736-8923-589d-b8d1-6bee82264f21", 00:12:02.677 "is_configured": true, 00:12:02.677 "data_offset": 0, 00:12:02.677 "data_size": 65536 00:12:02.677 }, 00:12:02.677 { 00:12:02.677 "name": "BaseBdev2", 00:12:02.677 "uuid": "7f01103f-4327-5516-ac81-6dcf39f14beb", 00:12:02.677 "is_configured": true, 00:12:02.677 "data_offset": 0, 00:12:02.677 "data_size": 65536 00:12:02.677 } 00:12:02.677 ] 00:12:02.677 }' 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.677 22:56:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.248 22:56:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.248 [2024-11-26 22:56:42.188900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.248 [2024-11-26 22:56:42.188995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.248 00:12:03.248 Latency(us) 00:12:03.248 [2024-11-26T22:56:42.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.248 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:03.248 raid_bdev1 : 8.98 88.82 266.45 0.00 0.00 15499.82 274.90 112872.95 00:12:03.248 [2024-11-26T22:56:42.376Z] =================================================================================================================== 00:12:03.248 [2024-11-26T22:56:42.376Z] Total : 88.82 266.45 0.00 0.00 15499.82 274.90 112872.95 00:12:03.248 [2024-11-26 22:56:42.276660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.248 [2024-11-26 22:56:42.276765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.248 [2024-11-26 22:56:42.276870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.248 [2024-11-26 22:56:42.276943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:03.248 { 00:12:03.248 "results": [ 00:12:03.248 { 00:12:03.248 "job": "raid_bdev1", 00:12:03.248 "core_mask": "0x1", 00:12:03.248 "workload": "randrw", 00:12:03.248 "percentage": 50, 00:12:03.248 "status": "finished", 00:12:03.248 "queue_depth": 2, 00:12:03.248 "io_size": 3145728, 00:12:03.248 "runtime": 8.984778, 00:12:03.248 "iops": 88.81688562588859, 00:12:03.248 "mibps": 266.4506568776658, 00:12:03.248 "io_failed": 0, 00:12:03.248 
"io_timeout": 0, 00:12:03.248 "avg_latency_us": 15499.816323690775, 00:12:03.248 "min_latency_us": 274.8993288590604, 00:12:03.248 "max_latency_us": 112872.9504052994 00:12:03.248 } 00:12:03.248 ], 00:12:03.248 "core_count": 1 00:12:03.248 } 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i 
= 0 )) 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.248 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:03.507 /dev/nbd0 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.507 1+0 records in 00:12:03.507 1+0 records out 00:12:03.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446929 s, 9.2 MB/s 00:12:03.507 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.508 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:03.767 /dev/nbd1 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.767 22:56:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.767 1+0 records in 00:12:03.767 1+0 records out 00:12:03.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556665 s, 7.4 MB/s 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.767 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.028 22:56:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.028 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88771 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 88771 ']' 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 88771 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88771 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88771' 00:12:04.328 killing process with pid 88771 00:12:04.328 Received shutdown signal, test time was about 10.101015 seconds 00:12:04.328 00:12:04.328 Latency(us) 00:12:04.328 [2024-11-26T22:56:43.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.328 [2024-11-26T22:56:43.456Z] =================================================================================================================== 00:12:04.328 [2024-11-26T22:56:43.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 88771 00:12:04.328 [2024-11-26 22:56:43.389144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.328 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 88771 00:12:04.328 [2024-11-26 22:56:43.438301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:04.922 00:12:04.922 real 0m12.144s 00:12:04.922 user 0m15.262s 00:12:04.922 sys 0m1.599s 00:12:04.922 ************************************ 00:12:04.922 END TEST raid_rebuild_test_io 00:12:04.922 ************************************ 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:04.922 22:56:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:04.922 22:56:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:04.922 22:56:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.922 22:56:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 ************************************ 00:12:04.922 START TEST raid_rebuild_test_sb_io 00:12:04.922 ************************************ 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89157 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89157 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89157 ']' 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.922 22:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:04.923 Zero copy mechanism will not be used. 00:12:04.923 [2024-11-26 22:56:43.956847] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:12:04.923 [2024-11-26 22:56:43.956995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89157 ] 00:12:05.182 [2024-11-26 22:56:44.099073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:05.182 [2024-11-26 22:56:44.136445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.182 [2024-11-26 22:56:44.175207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.182 [2024-11-26 22:56:44.254303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.182 [2024-11-26 22:56:44.254370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 BaseBdev1_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 [2024-11-26 22:56:44.801114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:05.752 [2024-11-26 22:56:44.801237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.752 [2024-11-26 22:56:44.801292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:05.752 [2024-11-26 22:56:44.801314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.752 [2024-11-26 22:56:44.803793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.752 [2024-11-26 22:56:44.803905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.752 BaseBdev1 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 BaseBdev2_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 [2024-11-26 22:56:44.835800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:05.752 [2024-11-26 22:56:44.835886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.752 [2024-11-26 22:56:44.835909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:05.752 [2024-11-26 22:56:44.835923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.752 [2024-11-26 22:56:44.838330] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.752 [2024-11-26 22:56:44.838372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.752 BaseBdev2 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 spare_malloc 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.752 spare_delay 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.752 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.012 [2024-11-26 22:56:44.882544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.012 [2024-11-26 22:56:44.882614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.012 [2024-11-26 22:56:44.882635] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:06.012 [2024-11-26 22:56:44.882651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.012 [2024-11-26 22:56:44.885096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.012 [2024-11-26 22:56:44.885144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.012 spare 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.012 [2024-11-26 22:56:44.894594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.012 [2024-11-26 22:56:44.896757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.012 [2024-11-26 22:56:44.896972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:06.012 [2024-11-26 22:56:44.897029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.012 [2024-11-26 22:56:44.897353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:06.012 [2024-11-26 22:56:44.897567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:06.012 [2024-11-26 22:56:44.897622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:06.012 [2024-11-26 22:56:44.897790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.012 22:56:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.012 "name": "raid_bdev1", 00:12:06.012 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:06.012 
"strip_size_kb": 0, 00:12:06.012 "state": "online", 00:12:06.012 "raid_level": "raid1", 00:12:06.012 "superblock": true, 00:12:06.012 "num_base_bdevs": 2, 00:12:06.012 "num_base_bdevs_discovered": 2, 00:12:06.012 "num_base_bdevs_operational": 2, 00:12:06.012 "base_bdevs_list": [ 00:12:06.012 { 00:12:06.012 "name": "BaseBdev1", 00:12:06.012 "uuid": "02355fb6-92a6-568e-a815-d8f45cfb4a4a", 00:12:06.012 "is_configured": true, 00:12:06.012 "data_offset": 2048, 00:12:06.012 "data_size": 63488 00:12:06.012 }, 00:12:06.012 { 00:12:06.012 "name": "BaseBdev2", 00:12:06.012 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:06.012 "is_configured": true, 00:12:06.012 "data_offset": 2048, 00:12:06.012 "data_size": 63488 00:12:06.012 } 00:12:06.012 ] 00:12:06.012 }' 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.012 22:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.272 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:06.272 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.272 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.272 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.272 [2024-11-26 22:56:45.366956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.272 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.532 [2024-11-26 22:56:45.462687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.532 22:56:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.532 "name": "raid_bdev1", 00:12:06.532 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:06.532 "strip_size_kb": 0, 00:12:06.532 "state": "online", 00:12:06.532 "raid_level": "raid1", 00:12:06.532 "superblock": true, 00:12:06.532 "num_base_bdevs": 2, 00:12:06.532 "num_base_bdevs_discovered": 1, 00:12:06.532 "num_base_bdevs_operational": 1, 00:12:06.532 "base_bdevs_list": [ 00:12:06.532 { 00:12:06.532 "name": null, 00:12:06.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.532 "is_configured": false, 00:12:06.532 "data_offset": 0, 00:12:06.532 "data_size": 63488 00:12:06.532 }, 00:12:06.532 { 00:12:06.532 "name": "BaseBdev2", 00:12:06.532 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:06.532 "is_configured": true, 00:12:06.532 "data_offset": 2048, 00:12:06.532 "data_size": 63488 00:12:06.532 } 00:12:06.532 ] 00:12:06.532 }' 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.532 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.532 [2024-11-26 22:56:45.563609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:06.532 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.532 Zero copy mechanism will not be used. 00:12:06.532 Running I/O for 60 seconds... 00:12:06.792 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:06.792 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.792 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.792 [2024-11-26 22:56:45.911918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.052 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.053 22:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:07.053 [2024-11-26 22:56:45.965955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:07.053 [2024-11-26 22:56:45.968359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:07.053 [2024-11-26 22:56:46.092746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.053 [2024-11-26 22:56:46.093446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.312 [2024-11-26 22:56:46.215018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.312 [2024-11-26 22:56:46.215243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:07.572 243.00 IOPS, 729.00 MiB/s [2024-11-26T22:56:46.700Z] [2024-11-26 22:56:46.571330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:07.572 [2024-11-26 22:56:46.571806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:07.832 [2024-11-26 22:56:46.905055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:07.832 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.832 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.832 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.832 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.832 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.092 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.092 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.092 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.092 22:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.092 "name": "raid_bdev1", 00:12:08.092 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:08.092 "strip_size_kb": 0, 00:12:08.092 "state": "online", 00:12:08.092 "raid_level": "raid1", 
00:12:08.092 "superblock": true, 00:12:08.092 "num_base_bdevs": 2, 00:12:08.092 "num_base_bdevs_discovered": 2, 00:12:08.092 "num_base_bdevs_operational": 2, 00:12:08.092 "process": { 00:12:08.092 "type": "rebuild", 00:12:08.092 "target": "spare", 00:12:08.092 "progress": { 00:12:08.092 "blocks": 14336, 00:12:08.092 "percent": 22 00:12:08.092 } 00:12:08.092 }, 00:12:08.092 "base_bdevs_list": [ 00:12:08.092 { 00:12:08.092 "name": "spare", 00:12:08.092 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:08.092 "is_configured": true, 00:12:08.092 "data_offset": 2048, 00:12:08.092 "data_size": 63488 00:12:08.092 }, 00:12:08.092 { 00:12:08.092 "name": "BaseBdev2", 00:12:08.092 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:08.092 "is_configured": true, 00:12:08.092 "data_offset": 2048, 00:12:08.092 "data_size": 63488 00:12:08.092 } 00:12:08.092 ] 00:12:08.092 }' 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.092 [2024-11-26 22:56:47.018742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 [2024-11-26 22:56:47.115106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.092 [2024-11-26 22:56:47.149514] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.092 [2024-11-26 22:56:47.157509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.092 [2024-11-26 22:56:47.157553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.092 [2024-11-26 22:56:47.157585] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.092 [2024-11-26 22:56:47.174111] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.352 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.352 "name": "raid_bdev1", 00:12:08.352 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:08.352 "strip_size_kb": 0, 00:12:08.352 "state": "online", 00:12:08.352 "raid_level": "raid1", 00:12:08.352 "superblock": true, 00:12:08.352 "num_base_bdevs": 2, 00:12:08.352 "num_base_bdevs_discovered": 1, 00:12:08.352 "num_base_bdevs_operational": 1, 00:12:08.352 "base_bdevs_list": [ 00:12:08.352 { 00:12:08.352 "name": null, 00:12:08.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.352 "is_configured": false, 00:12:08.352 "data_offset": 0, 00:12:08.352 "data_size": 63488 00:12:08.352 }, 00:12:08.352 { 00:12:08.352 "name": "BaseBdev2", 00:12:08.352 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:08.352 "is_configured": true, 00:12:08.352 "data_offset": 2048, 00:12:08.352 "data_size": 63488 00:12:08.352 } 00:12:08.352 ] 00:12:08.352 }' 00:12:08.352 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.352 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 214.50 IOPS, 643.50 MiB/s [2024-11-26T22:56:47.740Z] 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.612 "name": "raid_bdev1", 00:12:08.612 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:08.612 "strip_size_kb": 0, 00:12:08.612 "state": "online", 00:12:08.612 "raid_level": "raid1", 00:12:08.612 "superblock": true, 00:12:08.612 "num_base_bdevs": 2, 00:12:08.612 "num_base_bdevs_discovered": 1, 00:12:08.612 "num_base_bdevs_operational": 1, 00:12:08.612 "base_bdevs_list": [ 00:12:08.612 { 00:12:08.612 "name": null, 00:12:08.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.612 "is_configured": false, 00:12:08.612 "data_offset": 0, 00:12:08.612 "data_size": 63488 00:12:08.612 }, 00:12:08.612 { 00:12:08.612 "name": "BaseBdev2", 00:12:08.612 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:08.612 "is_configured": true, 00:12:08.612 "data_offset": 2048, 00:12:08.612 "data_size": 63488 00:12:08.612 } 00:12:08.612 ] 00:12:08.612 }' 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:08.612 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.871 [2024-11-26 22:56:47.775546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.871 22:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:08.871 [2024-11-26 22:56:47.814332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:08.871 [2024-11-26 22:56:47.816713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.871 [2024-11-26 22:56:47.929717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:08.871 [2024-11-26 22:56:47.930132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.131 [2024-11-26 22:56:48.137665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.131 [2024-11-26 22:56:48.137824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.390 [2024-11-26 22:56:48.473820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.650 180.33 IOPS, 541.00 MiB/s 
[2024-11-26T22:56:48.778Z] [2024-11-26 22:56:48.604809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:09.650 [2024-11-26 22:56:48.605037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.910 "name": "raid_bdev1", 00:12:09.910 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:09.910 "strip_size_kb": 0, 00:12:09.910 "state": "online", 00:12:09.910 "raid_level": "raid1", 00:12:09.910 "superblock": true, 00:12:09.910 "num_base_bdevs": 2, 00:12:09.910 "num_base_bdevs_discovered": 2, 00:12:09.910 "num_base_bdevs_operational": 2, 00:12:09.910 "process": { 00:12:09.910 "type": "rebuild", 
00:12:09.910 "target": "spare", 00:12:09.910 "progress": { 00:12:09.910 "blocks": 12288, 00:12:09.910 "percent": 19 00:12:09.910 } 00:12:09.910 }, 00:12:09.910 "base_bdevs_list": [ 00:12:09.910 { 00:12:09.910 "name": "spare", 00:12:09.910 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:09.910 "is_configured": true, 00:12:09.910 "data_offset": 2048, 00:12:09.910 "data_size": 63488 00:12:09.910 }, 00:12:09.910 { 00:12:09.910 "name": "BaseBdev2", 00:12:09.910 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:09.910 "is_configured": true, 00:12:09.910 "data_offset": 2048, 00:12:09.910 "data_size": 63488 00:12:09.910 } 00:12:09.910 ] 00:12:09.910 }' 00:12:09.910 [2024-11-26 22:56:48.850986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:09.910 [2024-11-26 22:56:48.851831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:09.910 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:09.911 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 
']' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=337 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.911 "name": "raid_bdev1", 00:12:09.911 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:09.911 "strip_size_kb": 0, 00:12:09.911 "state": "online", 00:12:09.911 "raid_level": "raid1", 00:12:09.911 "superblock": true, 00:12:09.911 "num_base_bdevs": 2, 00:12:09.911 "num_base_bdevs_discovered": 2, 00:12:09.911 "num_base_bdevs_operational": 2, 00:12:09.911 "process": { 00:12:09.911 "type": "rebuild", 00:12:09.911 
"target": "spare", 00:12:09.911 "progress": { 00:12:09.911 "blocks": 14336, 00:12:09.911 "percent": 22 00:12:09.911 } 00:12:09.911 }, 00:12:09.911 "base_bdevs_list": [ 00:12:09.911 { 00:12:09.911 "name": "spare", 00:12:09.911 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:09.911 "is_configured": true, 00:12:09.911 "data_offset": 2048, 00:12:09.911 "data_size": 63488 00:12:09.911 }, 00:12:09.911 { 00:12:09.911 "name": "BaseBdev2", 00:12:09.911 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:09.911 "is_configured": true, 00:12:09.911 "data_offset": 2048, 00:12:09.911 "data_size": 63488 00:12:09.911 } 00:12:09.911 ] 00:12:09.911 }' 00:12:09.911 22:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.911 22:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.911 22:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.171 22:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.171 22:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:10.171 [2024-11-26 22:56:49.067782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:10.171 [2024-11-26 22:56:49.068032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:10.430 [2024-11-26 22:56:49.385588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:10.430 [2024-11-26 22:56:49.504929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:10.430 [2024-11-26 22:56:49.505242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 
offset_begin: 18432 offset_end: 24576 00:12:10.949 147.00 IOPS, 441.00 MiB/s [2024-11-26T22:56:50.077Z] 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.949 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.208 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.208 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.208 "name": "raid_bdev1", 00:12:11.208 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:11.208 "strip_size_kb": 0, 00:12:11.208 "state": "online", 00:12:11.208 "raid_level": "raid1", 00:12:11.208 "superblock": true, 00:12:11.208 "num_base_bdevs": 2, 00:12:11.208 "num_base_bdevs_discovered": 2, 00:12:11.208 "num_base_bdevs_operational": 2, 00:12:11.208 "process": { 00:12:11.208 "type": "rebuild", 00:12:11.208 "target": "spare", 00:12:11.208 "progress": { 00:12:11.208 "blocks": 30720, 00:12:11.208 "percent": 48 00:12:11.208 } 
00:12:11.208 }, 00:12:11.208 "base_bdevs_list": [ 00:12:11.208 { 00:12:11.208 "name": "spare", 00:12:11.209 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:11.209 "is_configured": true, 00:12:11.209 "data_offset": 2048, 00:12:11.209 "data_size": 63488 00:12:11.209 }, 00:12:11.209 { 00:12:11.209 "name": "BaseBdev2", 00:12:11.209 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:11.209 "is_configured": true, 00:12:11.209 "data_offset": 2048, 00:12:11.209 "data_size": 63488 00:12:11.209 } 00:12:11.209 ] 00:12:11.209 }' 00:12:11.209 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.209 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.209 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.209 [2024-11-26 22:56:50.185386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:11.209 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.209 22:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.209 [2024-11-26 22:56:50.304371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:12.038 128.00 IOPS, 384.00 MiB/s [2024-11-26T22:56:51.166Z] [2024-11-26 22:56:50.979896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.298 "name": "raid_bdev1", 00:12:12.298 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:12.298 "strip_size_kb": 0, 00:12:12.298 "state": "online", 00:12:12.298 "raid_level": "raid1", 00:12:12.298 "superblock": true, 00:12:12.298 "num_base_bdevs": 2, 00:12:12.298 "num_base_bdevs_discovered": 2, 00:12:12.298 "num_base_bdevs_operational": 2, 00:12:12.298 "process": { 00:12:12.298 "type": "rebuild", 00:12:12.298 "target": "spare", 00:12:12.298 "progress": { 00:12:12.298 "blocks": 49152, 00:12:12.298 "percent": 77 00:12:12.298 } 00:12:12.298 }, 00:12:12.298 "base_bdevs_list": [ 00:12:12.298 { 00:12:12.298 "name": "spare", 00:12:12.298 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:12.298 "is_configured": true, 00:12:12.298 "data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 }, 00:12:12.298 { 00:12:12.298 "name": "BaseBdev2", 00:12:12.298 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:12.298 "is_configured": true, 00:12:12.298 
"data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 } 00:12:12.298 ] 00:12:12.298 }' 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.298 22:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.127 114.00 IOPS, 342.00 MiB/s [2024-11-26T22:56:52.255Z] [2024-11-26 22:56:51.957286] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:13.127 [2024-11-26 22:56:52.062626] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:13.127 [2024-11-26 22:56:52.066698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.386 "name": "raid_bdev1", 00:12:13.386 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:13.386 "strip_size_kb": 0, 00:12:13.386 "state": "online", 00:12:13.386 "raid_level": "raid1", 00:12:13.386 "superblock": true, 00:12:13.386 "num_base_bdevs": 2, 00:12:13.386 "num_base_bdevs_discovered": 2, 00:12:13.386 "num_base_bdevs_operational": 2, 00:12:13.386 "base_bdevs_list": [ 00:12:13.386 { 00:12:13.386 "name": "spare", 00:12:13.386 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:13.386 "is_configured": true, 00:12:13.386 "data_offset": 2048, 00:12:13.386 "data_size": 63488 00:12:13.386 }, 00:12:13.386 { 00:12:13.386 "name": "BaseBdev2", 00:12:13.386 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:13.386 "is_configured": true, 00:12:13.386 "data_offset": 2048, 00:12:13.386 "data_size": 63488 00:12:13.386 } 00:12:13.386 ] 00:12:13.386 }' 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.386 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.646 "name": "raid_bdev1", 00:12:13.646 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:13.646 "strip_size_kb": 0, 00:12:13.646 "state": "online", 00:12:13.646 "raid_level": "raid1", 00:12:13.646 "superblock": true, 00:12:13.646 "num_base_bdevs": 2, 00:12:13.646 "num_base_bdevs_discovered": 2, 00:12:13.646 "num_base_bdevs_operational": 2, 00:12:13.646 "base_bdevs_list": [ 00:12:13.646 { 00:12:13.646 "name": "spare", 00:12:13.646 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:13.646 "is_configured": true, 00:12:13.646 "data_offset": 2048, 00:12:13.646 "data_size": 63488 00:12:13.646 }, 00:12:13.646 { 00:12:13.646 "name": "BaseBdev2", 00:12:13.646 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:13.646 "is_configured": true, 00:12:13.646 "data_offset": 2048, 00:12:13.646 "data_size": 63488 00:12:13.646 } 00:12:13.646 ] 00:12:13.646 }' 
00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.646 101.71 IOPS, 305.14 MiB/s [2024-11-26T22:56:52.774Z] 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.646 "name": "raid_bdev1", 00:12:13.646 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:13.646 "strip_size_kb": 0, 00:12:13.646 "state": "online", 00:12:13.646 "raid_level": "raid1", 00:12:13.646 "superblock": true, 00:12:13.646 "num_base_bdevs": 2, 00:12:13.646 "num_base_bdevs_discovered": 2, 00:12:13.646 "num_base_bdevs_operational": 2, 00:12:13.646 "base_bdevs_list": [ 00:12:13.646 { 00:12:13.646 "name": "spare", 00:12:13.646 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:13.646 "is_configured": true, 00:12:13.646 "data_offset": 2048, 00:12:13.646 "data_size": 63488 00:12:13.646 }, 00:12:13.646 { 00:12:13.646 "name": "BaseBdev2", 00:12:13.646 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:13.646 "is_configured": true, 00:12:13.646 "data_offset": 2048, 00:12:13.646 "data_size": 63488 00:12:13.646 } 00:12:13.646 ] 00:12:13.646 }' 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.646 22:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.215 [2024-11-26 22:56:53.092306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.215 [2024-11-26 22:56:53.092423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.215 00:12:14.215 
Latency(us) 00:12:14.215 [2024-11-26T22:56:53.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.215 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:14.215 raid_bdev1 : 7.59 96.51 289.52 0.00 0.00 13952.38 269.54 110131.10 00:12:14.215 [2024-11-26T22:56:53.343Z] =================================================================================================================== 00:12:14.215 [2024-11-26T22:56:53.343Z] Total : 96.51 289.52 0.00 0.00 13952.38 269.54 110131.10 00:12:14.215 [2024-11-26 22:56:53.156339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.215 [2024-11-26 22:56:53.156456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.215 [2024-11-26 22:56:53.156574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.215 [2024-11-26 22:56:53.156661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:14.215 { 00:12:14.215 "results": [ 00:12:14.215 { 00:12:14.215 "job": "raid_bdev1", 00:12:14.215 "core_mask": "0x1", 00:12:14.215 "workload": "randrw", 00:12:14.215 "percentage": 50, 00:12:14.215 "status": "finished", 00:12:14.215 "queue_depth": 2, 00:12:14.215 "io_size": 3145728, 00:12:14.215 "runtime": 7.585084, 00:12:14.215 "iops": 96.50519361420388, 00:12:14.215 "mibps": 289.5155808426116, 00:12:14.215 "io_failed": 0, 00:12:14.215 "io_timeout": 0, 00:12:14.215 "avg_latency_us": 13952.37609421788, 00:12:14.215 "min_latency_us": 269.54414712803975, 00:12:14.215 "max_latency_us": 110131.09735901683 00:12:14.215 } 00:12:14.215 ], 00:12:14.215 "core_count": 1 00:12:14.215 } 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.215 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:14.475 /dev/nbd0 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 
-- # basename /dev/nbd0 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.475 1+0 records in 00:12:14.475 1+0 records out 00:12:14.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366298 s, 11.2 MB/s 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.475 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:14.735 /dev/nbd1 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:14.735 22:56:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.735 1+0 records in 00:12:14.735 1+0 records out 00:12:14.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302218 s, 13.6 MB/s 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.735 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.736 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.996 22:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.996 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.256 [2024-11-26 22:56:54.240779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:15.256 [2024-11-26 22:56:54.240939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.256 [2024-11-26 22:56:54.240988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:15.256 [2024-11-26 22:56:54.241030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.256 [2024-11-26 22:56:54.243636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.256 [2024-11-26 22:56:54.243726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:15.256 [2024-11-26 22:56:54.243865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:15.256 [2024-11-26 22:56:54.243950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.256 [2024-11-26 22:56:54.244143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.256 spare 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.256 [2024-11-26 22:56:54.344305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.256 [2024-11-26 22:56:54.344379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.256 [2024-11-26 22:56:54.344755] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:12:15.256 [2024-11-26 22:56:54.344957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.256 [2024-11-26 22:56:54.345016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.256 [2024-11-26 22:56:54.345225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.256 22:56:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.256 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.516 "name": "raid_bdev1", 00:12:15.516 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:15.516 "strip_size_kb": 0, 00:12:15.516 "state": "online", 00:12:15.516 "raid_level": "raid1", 00:12:15.516 "superblock": true, 00:12:15.516 "num_base_bdevs": 2, 00:12:15.516 "num_base_bdevs_discovered": 2, 00:12:15.516 "num_base_bdevs_operational": 2, 00:12:15.516 "base_bdevs_list": [ 00:12:15.516 { 00:12:15.516 "name": "spare", 00:12:15.516 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:15.516 "is_configured": true, 00:12:15.516 "data_offset": 2048, 00:12:15.516 "data_size": 63488 00:12:15.516 }, 00:12:15.516 { 00:12:15.516 "name": "BaseBdev2", 00:12:15.516 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:15.516 "is_configured": true, 00:12:15.516 "data_offset": 2048, 00:12:15.516 "data_size": 63488 00:12:15.516 } 00:12:15.516 ] 00:12:15.516 }' 00:12:15.516 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.516 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.775 "name": "raid_bdev1", 00:12:15.775 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:15.775 "strip_size_kb": 0, 00:12:15.775 "state": "online", 00:12:15.775 "raid_level": "raid1", 00:12:15.775 "superblock": true, 00:12:15.775 "num_base_bdevs": 2, 00:12:15.775 "num_base_bdevs_discovered": 2, 00:12:15.775 "num_base_bdevs_operational": 2, 00:12:15.775 "base_bdevs_list": [ 00:12:15.775 { 00:12:15.775 "name": "spare", 00:12:15.775 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:15.775 "is_configured": true, 00:12:15.775 "data_offset": 2048, 00:12:15.775 "data_size": 63488 00:12:15.775 }, 00:12:15.775 { 00:12:15.775 "name": "BaseBdev2", 00:12:15.775 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:15.775 "is_configured": true, 00:12:15.775 "data_offset": 2048, 00:12:15.775 "data_size": 63488 00:12:15.775 } 00:12:15.775 ] 00:12:15.775 }' 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.775 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.035 [2024-11-26 22:56:54.981514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.035 22:56:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.035 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.036 22:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.036 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.036 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.036 "name": "raid_bdev1", 00:12:16.036 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:16.036 "strip_size_kb": 0, 00:12:16.036 "state": "online", 00:12:16.036 "raid_level": "raid1", 00:12:16.036 "superblock": true, 00:12:16.036 "num_base_bdevs": 2, 00:12:16.036 "num_base_bdevs_discovered": 1, 00:12:16.036 "num_base_bdevs_operational": 1, 00:12:16.036 "base_bdevs_list": [ 00:12:16.036 { 00:12:16.036 "name": null, 00:12:16.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.036 "is_configured": false, 00:12:16.036 "data_offset": 0, 00:12:16.036 "data_size": 63488 00:12:16.036 }, 00:12:16.036 { 00:12:16.036 "name": "BaseBdev2", 00:12:16.036 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:16.036 "is_configured": true, 00:12:16.036 "data_offset": 2048, 00:12:16.036 "data_size": 63488 00:12:16.036 } 00:12:16.036 ] 00:12:16.036 }' 00:12:16.036 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.036 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.295 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:16.295 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.295 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.295 [2024-11-26 22:56:55.349691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.295 [2024-11-26 22:56:55.350010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:16.295 [2024-11-26 22:56:55.350084] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:16.296 [2024-11-26 22:56:55.350160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.296 [2024-11-26 22:56:55.359674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:12:16.296 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.296 22:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:16.296 [2024-11-26 22:56:55.362044] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.675 
22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.675 "name": "raid_bdev1", 00:12:17.675 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:17.675 "strip_size_kb": 0, 00:12:17.675 "state": "online", 00:12:17.675 "raid_level": "raid1", 00:12:17.675 "superblock": true, 00:12:17.675 "num_base_bdevs": 2, 00:12:17.675 "num_base_bdevs_discovered": 2, 00:12:17.675 "num_base_bdevs_operational": 2, 00:12:17.675 "process": { 00:12:17.675 "type": "rebuild", 00:12:17.675 "target": "spare", 00:12:17.675 "progress": { 00:12:17.675 "blocks": 20480, 00:12:17.675 "percent": 32 00:12:17.675 } 00:12:17.675 }, 00:12:17.675 "base_bdevs_list": [ 00:12:17.675 { 00:12:17.675 "name": "spare", 00:12:17.675 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:17.675 "is_configured": true, 00:12:17.675 "data_offset": 2048, 00:12:17.675 "data_size": 63488 00:12:17.675 }, 00:12:17.675 { 00:12:17.675 "name": "BaseBdev2", 00:12:17.675 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:17.675 "is_configured": true, 00:12:17.675 "data_offset": 2048, 00:12:17.675 "data_size": 63488 00:12:17.675 } 00:12:17.675 ] 00:12:17.675 }' 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.675 22:56:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.675 [2024-11-26 22:56:56.521813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.675 [2024-11-26 22:56:56.571956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:17.675 [2024-11-26 22:56:56.572042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.675 [2024-11-26 22:56:56.572065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.675 [2024-11-26 22:56:56.572074] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.675 22:56:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.675 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.675 "name": "raid_bdev1", 00:12:17.675 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:17.675 "strip_size_kb": 0, 00:12:17.675 "state": "online", 00:12:17.675 "raid_level": "raid1", 00:12:17.675 "superblock": true, 00:12:17.675 "num_base_bdevs": 2, 00:12:17.675 "num_base_bdevs_discovered": 1, 00:12:17.675 "num_base_bdevs_operational": 1, 00:12:17.675 "base_bdevs_list": [ 00:12:17.675 { 00:12:17.675 "name": null, 00:12:17.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.675 "is_configured": false, 00:12:17.675 "data_offset": 0, 00:12:17.675 "data_size": 63488 00:12:17.675 }, 00:12:17.675 { 00:12:17.675 "name": "BaseBdev2", 00:12:17.675 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:17.675 "is_configured": true, 00:12:17.675 "data_offset": 2048, 00:12:17.675 
"data_size": 63488 00:12:17.676 } 00:12:17.676 ] 00:12:17.676 }' 00:12:17.676 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.676 22:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.245 22:56:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.245 22:56:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.245 22:56:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.245 [2024-11-26 22:56:57.068743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.245 [2024-11-26 22:56:57.068910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.245 [2024-11-26 22:56:57.068960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:18.245 [2024-11-26 22:56:57.068998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.245 [2024-11-26 22:56:57.069594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.245 [2024-11-26 22:56:57.069664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.245 [2024-11-26 22:56:57.069812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:18.245 [2024-11-26 22:56:57.069860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:18.245 [2024-11-26 22:56:57.069918] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:18.245 [2024-11-26 22:56:57.069997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.245 [2024-11-26 22:56:57.079504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:12:18.245 spare 00:12:18.245 22:56:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.245 22:56:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:18.245 [2024-11-26 22:56:57.081819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.183 "name": "raid_bdev1", 00:12:19.183 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:19.183 "strip_size_kb": 0, 00:12:19.183 
"state": "online", 00:12:19.183 "raid_level": "raid1", 00:12:19.183 "superblock": true, 00:12:19.183 "num_base_bdevs": 2, 00:12:19.183 "num_base_bdevs_discovered": 2, 00:12:19.183 "num_base_bdevs_operational": 2, 00:12:19.183 "process": { 00:12:19.183 "type": "rebuild", 00:12:19.183 "target": "spare", 00:12:19.183 "progress": { 00:12:19.183 "blocks": 20480, 00:12:19.183 "percent": 32 00:12:19.183 } 00:12:19.183 }, 00:12:19.183 "base_bdevs_list": [ 00:12:19.183 { 00:12:19.183 "name": "spare", 00:12:19.183 "uuid": "17d47b41-b161-5ef8-ba5e-31b00c2239f1", 00:12:19.183 "is_configured": true, 00:12:19.183 "data_offset": 2048, 00:12:19.183 "data_size": 63488 00:12:19.183 }, 00:12:19.183 { 00:12:19.183 "name": "BaseBdev2", 00:12:19.183 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:19.183 "is_configured": true, 00:12:19.183 "data_offset": 2048, 00:12:19.183 "data_size": 63488 00:12:19.183 } 00:12:19.183 ] 00:12:19.183 }' 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.183 [2024-11-26 22:56:58.240338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.183 [2024-11-26 22:56:58.291923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:19.183 [2024-11-26 22:56:58.291998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.183 [2024-11-26 22:56:58.292016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.183 [2024-11-26 22:56:58.292030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:19.183 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.184 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.445 22:56:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.445 "name": "raid_bdev1", 00:12:19.445 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:19.445 "strip_size_kb": 0, 00:12:19.445 "state": "online", 00:12:19.445 "raid_level": "raid1", 00:12:19.445 "superblock": true, 00:12:19.445 "num_base_bdevs": 2, 00:12:19.445 "num_base_bdevs_discovered": 1, 00:12:19.445 "num_base_bdevs_operational": 1, 00:12:19.445 "base_bdevs_list": [ 00:12:19.445 { 00:12:19.445 "name": null, 00:12:19.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.445 "is_configured": false, 00:12:19.445 "data_offset": 0, 00:12:19.445 "data_size": 63488 00:12:19.445 }, 00:12:19.445 { 00:12:19.445 "name": "BaseBdev2", 00:12:19.445 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:19.445 "is_configured": true, 00:12:19.445 "data_offset": 2048, 00:12:19.445 "data_size": 63488 00:12:19.445 } 00:12:19.445 ] 00:12:19.445 }' 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.445 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.704 "name": "raid_bdev1", 00:12:19.704 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:19.704 "strip_size_kb": 0, 00:12:19.704 "state": "online", 00:12:19.704 "raid_level": "raid1", 00:12:19.704 "superblock": true, 00:12:19.704 "num_base_bdevs": 2, 00:12:19.704 "num_base_bdevs_discovered": 1, 00:12:19.704 "num_base_bdevs_operational": 1, 00:12:19.704 "base_bdevs_list": [ 00:12:19.704 { 00:12:19.704 "name": null, 00:12:19.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.704 "is_configured": false, 00:12:19.704 "data_offset": 0, 00:12:19.704 "data_size": 63488 00:12:19.704 }, 00:12:19.704 { 00:12:19.704 "name": "BaseBdev2", 00:12:19.704 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:19.704 "is_configured": true, 00:12:19.704 "data_offset": 2048, 00:12:19.704 "data_size": 63488 00:12:19.704 } 00:12:19.704 ] 00:12:19.704 }' 00:12:19.704 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.963 [2024-11-26 22:56:58.904636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.963 [2024-11-26 22:56:58.904711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.963 [2024-11-26 22:56:58.904736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:19.963 [2024-11-26 22:56:58.904750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.963 [2024-11-26 22:56:58.905283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.963 [2024-11-26 22:56:58.905313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.963 [2024-11-26 22:56:58.905406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:19.963 [2024-11-26 22:56:58.905431] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:19.963 [2024-11-26 22:56:58.905452] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:19.963 [2024-11-26 22:56:58.905479] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:19.963 BaseBdev1 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.963 22:56:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.919 "name": "raid_bdev1", 00:12:20.919 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:20.919 "strip_size_kb": 0, 00:12:20.919 "state": "online", 00:12:20.919 "raid_level": "raid1", 00:12:20.919 "superblock": true, 00:12:20.919 "num_base_bdevs": 2, 00:12:20.919 "num_base_bdevs_discovered": 1, 00:12:20.919 "num_base_bdevs_operational": 1, 00:12:20.919 "base_bdevs_list": [ 00:12:20.919 { 00:12:20.919 "name": null, 00:12:20.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.919 "is_configured": false, 00:12:20.919 "data_offset": 0, 00:12:20.919 "data_size": 63488 00:12:20.919 }, 00:12:20.919 { 00:12:20.919 "name": "BaseBdev2", 00:12:20.919 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:20.919 "is_configured": true, 00:12:20.919 "data_offset": 2048, 00:12:20.919 "data_size": 63488 00:12:20.919 } 00:12:20.919 ] 00:12:20.919 }' 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.919 22:56:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.488 "name": "raid_bdev1", 00:12:21.488 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:21.488 "strip_size_kb": 0, 00:12:21.488 "state": "online", 00:12:21.488 "raid_level": "raid1", 00:12:21.488 "superblock": true, 00:12:21.488 "num_base_bdevs": 2, 00:12:21.488 "num_base_bdevs_discovered": 1, 00:12:21.488 "num_base_bdevs_operational": 1, 00:12:21.488 "base_bdevs_list": [ 00:12:21.488 { 00:12:21.488 "name": null, 00:12:21.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.488 "is_configured": false, 00:12:21.488 "data_offset": 0, 00:12:21.488 "data_size": 63488 00:12:21.488 }, 00:12:21.488 { 00:12:21.488 "name": "BaseBdev2", 00:12:21.488 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:21.488 "is_configured": true, 00:12:21.488 "data_offset": 2048, 00:12:21.488 "data_size": 63488 00:12:21.488 } 00:12:21.488 ] 00:12:21.488 }' 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.488 [2024-11-26 22:57:00.509271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.488 [2024-11-26 22:57:00.509524] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:21.488 [2024-11-26 22:57:00.509548] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:21.488 request: 00:12:21.488 { 00:12:21.488 "base_bdev": "BaseBdev1", 00:12:21.488 "raid_bdev": "raid_bdev1", 00:12:21.488 "method": "bdev_raid_add_base_bdev", 00:12:21.488 "req_id": 1 00:12:21.488 } 00:12:21.488 Got JSON-RPC error response 00:12:21.488 response: 00:12:21.488 { 00:12:21.488 "code": -22, 00:12:21.488 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:21.488 } 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.488 22:57:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.428 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.697 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.697 "name": "raid_bdev1", 00:12:22.697 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:22.697 "strip_size_kb": 0, 00:12:22.697 "state": "online", 00:12:22.697 "raid_level": "raid1", 00:12:22.697 "superblock": true, 00:12:22.697 "num_base_bdevs": 2, 00:12:22.697 "num_base_bdevs_discovered": 1, 00:12:22.697 "num_base_bdevs_operational": 1, 00:12:22.697 "base_bdevs_list": [ 00:12:22.697 { 00:12:22.697 "name": null, 00:12:22.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.697 "is_configured": false, 00:12:22.697 "data_offset": 0, 00:12:22.697 "data_size": 63488 00:12:22.697 }, 00:12:22.697 { 00:12:22.697 "name": "BaseBdev2", 00:12:22.697 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:22.697 "is_configured": true, 00:12:22.697 "data_offset": 2048, 00:12:22.697 "data_size": 63488 00:12:22.697 } 00:12:22.697 ] 00:12:22.697 }' 00:12:22.697 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.697 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.973 22:57:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.973 22:57:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.973 "name": "raid_bdev1", 00:12:22.973 "uuid": "d989eb4b-b4e3-48d7-bdb2-2a739ffe45c2", 00:12:22.973 "strip_size_kb": 0, 00:12:22.973 "state": "online", 00:12:22.973 "raid_level": "raid1", 00:12:22.973 "superblock": true, 00:12:22.973 "num_base_bdevs": 2, 00:12:22.973 "num_base_bdevs_discovered": 1, 00:12:22.973 "num_base_bdevs_operational": 1, 00:12:22.973 "base_bdevs_list": [ 00:12:22.973 { 00:12:22.973 "name": null, 00:12:22.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.973 "is_configured": false, 00:12:22.973 "data_offset": 0, 00:12:22.973 "data_size": 63488 00:12:22.973 }, 00:12:22.973 { 00:12:22.973 "name": "BaseBdev2", 00:12:22.973 "uuid": "5dae242c-afa4-5b18-b7a0-89283fd0c839", 00:12:22.973 "is_configured": true, 00:12:22.973 "data_offset": 2048, 00:12:22.973 "data_size": 63488 00:12:22.973 } 00:12:22.973 ] 00:12:22.973 }' 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.973 22:57:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89157 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89157 ']' 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89157 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:22.973 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.232 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89157 00:12:23.232 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.232 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.232 killing process with pid 89157 00:12:23.232 Received shutdown signal, test time was about 16.564509 seconds 00:12:23.232 00:12:23.232 Latency(us) 00:12:23.232 [2024-11-26T22:57:02.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.232 [2024-11-26T22:57:02.360Z] =================================================================================================================== 00:12:23.232 [2024-11-26T22:57:02.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:23.233 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89157' 00:12:23.233 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89157 00:12:23.233 [2024-11-26 22:57:02.132331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.233 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89157 00:12:23.233 [2024-11-26 22:57:02.132501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.233 [2024-11-26 22:57:02.132574] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.233 [2024-11-26 22:57:02.132585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:23.233 [2024-11-26 22:57:02.182367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:23.492 00:12:23.492 real 0m18.665s 00:12:23.492 user 0m24.620s 00:12:23.492 sys 0m2.328s 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.492 ************************************ 00:12:23.492 END TEST raid_rebuild_test_sb_io 00:12:23.492 ************************************ 00:12:23.492 22:57:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:23.492 22:57:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:23.492 22:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:23.492 22:57:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.492 22:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.492 ************************************ 00:12:23.492 START TEST raid_rebuild_test 00:12:23.492 ************************************ 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:23.492 22:57:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.492 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89830 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89830 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 89830 ']' 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.493 22:57:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.753 [2024-11-26 22:57:02.692911] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:12:23.753 [2024-11-26 22:57:02.693103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89830 ] 00:12:23.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:23.753 Zero copy mechanism will not be used. 00:12:23.753 [2024-11-26 22:57:02.829574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:23.753 [2024-11-26 22:57:02.868641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.012 [2024-11-26 22:57:02.908462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.012 [2024-11-26 22:57:02.987621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.012 [2024-11-26 22:57:02.987669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 BaseBdev1_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.583 
22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 [2024-11-26 22:57:03.536554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.583 [2024-11-26 22:57:03.536650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.583 [2024-11-26 22:57:03.536682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.583 [2024-11-26 22:57:03.536703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.583 [2024-11-26 22:57:03.539243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.583 [2024-11-26 22:57:03.539306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.583 BaseBdev1 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 BaseBdev2_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 [2024-11-26 
22:57:03.571175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:24.583 [2024-11-26 22:57:03.571359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.583 [2024-11-26 22:57:03.571391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:24.583 [2024-11-26 22:57:03.571406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.583 [2024-11-26 22:57:03.573830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.583 [2024-11-26 22:57:03.573875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.583 BaseBdev2 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 BaseBdev3_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 [2024-11-26 22:57:03.605751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:24.583 [2024-11-26 22:57:03.605817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:24.583 [2024-11-26 22:57:03.605843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:24.583 [2024-11-26 22:57:03.605857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.583 [2024-11-26 22:57:03.608314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.583 [2024-11-26 22:57:03.608424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:24.583 BaseBdev3 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 BaseBdev4_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 [2024-11-26 22:57:03.656896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:24.583 [2024-11-26 22:57:03.657003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.583 [2024-11-26 22:57:03.657039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:24.583 [2024-11-26 22:57:03.657063] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.583 [2024-11-26 22:57:03.660662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.583 [2024-11-26 22:57:03.660714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:24.583 BaseBdev4 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 spare_malloc 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 spare_delay 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.583 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.583 [2024-11-26 22:57:03.703649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:24.583 [2024-11-26 22:57:03.703805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.583 [2024-11-26 22:57:03.703830] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:24.583 [2024-11-26 22:57:03.703844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.583 [2024-11-26 22:57:03.706278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.583 [2024-11-26 22:57:03.706323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:24.843 spare 00:12:24.843 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.843 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:24.843 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.843 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.843 [2024-11-26 22:57:03.715731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.844 [2024-11-26 22:57:03.717890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.844 [2024-11-26 22:57:03.717965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.844 [2024-11-26 22:57:03.718012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.844 [2024-11-26 22:57:03.718113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:24.844 [2024-11-26 22:57:03.718140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:24.844 [2024-11-26 22:57:03.718512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:24.844 [2024-11-26 22:57:03.718711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:24.844 [2024-11-26 22:57:03.718770] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:24.844 [2024-11-26 22:57:03.718948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.844 "name": "raid_bdev1", 00:12:24.844 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:24.844 "strip_size_kb": 0, 00:12:24.844 "state": "online", 00:12:24.844 "raid_level": "raid1", 00:12:24.844 "superblock": false, 00:12:24.844 "num_base_bdevs": 4, 00:12:24.844 "num_base_bdevs_discovered": 4, 00:12:24.844 "num_base_bdevs_operational": 4, 00:12:24.844 "base_bdevs_list": [ 00:12:24.844 { 00:12:24.844 "name": "BaseBdev1", 00:12:24.844 "uuid": "ec01160c-9856-5438-9333-1cc587acb2d6", 00:12:24.844 "is_configured": true, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 65536 00:12:24.844 }, 00:12:24.844 { 00:12:24.844 "name": "BaseBdev2", 00:12:24.844 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:24.844 "is_configured": true, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 65536 00:12:24.844 }, 00:12:24.844 { 00:12:24.844 "name": "BaseBdev3", 00:12:24.844 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:24.844 "is_configured": true, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 65536 00:12:24.844 }, 00:12:24.844 { 00:12:24.844 "name": "BaseBdev4", 00:12:24.844 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:24.844 "is_configured": true, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 65536 00:12:24.844 } 00:12:24.844 ] 00:12:24.844 }' 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.844 22:57:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.104 [2024-11-26 
22:57:04.156082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:25.104 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:25.365 22:57:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:25.365 [2024-11-26 22:57:04.387916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:25.365 /dev/nbd0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.365 1+0 records in 00:12:25.365 1+0 records out 00:12:25.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590372 s, 6.9 MB/s 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.365 22:57:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:25.365 22:57:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:31.945 65536+0 records in 00:12:31.945 65536+0 records out 00:12:31.945 33554432 bytes (34 MB, 32 MiB) copied, 5.33115 s, 6.3 MB/s 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.945 22:57:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.945 
[2024-11-26 22:57:10.002772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 [2024-11-26 22:57:10.013402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.945 22:57:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.945 "name": "raid_bdev1", 00:12:31.945 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:31.945 "strip_size_kb": 0, 00:12:31.945 "state": "online", 00:12:31.945 "raid_level": "raid1", 00:12:31.945 "superblock": false, 00:12:31.945 "num_base_bdevs": 4, 00:12:31.945 "num_base_bdevs_discovered": 3, 00:12:31.945 "num_base_bdevs_operational": 3, 00:12:31.945 "base_bdevs_list": [ 00:12:31.945 { 00:12:31.945 "name": null, 00:12:31.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.945 "is_configured": false, 00:12:31.945 "data_offset": 0, 00:12:31.945 "data_size": 65536 00:12:31.945 }, 00:12:31.945 { 00:12:31.945 "name": "BaseBdev2", 00:12:31.945 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:31.945 "is_configured": true, 00:12:31.945 "data_offset": 0, 00:12:31.945 "data_size": 65536 00:12:31.945 }, 00:12:31.945 { 00:12:31.945 "name": "BaseBdev3", 00:12:31.945 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:31.945 "is_configured": true, 
00:12:31.945 "data_offset": 0, 00:12:31.945 "data_size": 65536 00:12:31.945 }, 00:12:31.945 { 00:12:31.945 "name": "BaseBdev4", 00:12:31.945 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:31.945 "is_configured": true, 00:12:31.945 "data_offset": 0, 00:12:31.945 "data_size": 65536 00:12:31.945 } 00:12:31.945 ] 00:12:31.945 }' 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 [2024-11-26 22:57:10.445465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.945 [2024-11-26 22:57:10.452727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 22:57:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:31.945 [2024-11-26 22:57:10.454980] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.516 22:57:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.516 "name": "raid_bdev1", 00:12:32.516 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:32.516 "strip_size_kb": 0, 00:12:32.516 "state": "online", 00:12:32.516 "raid_level": "raid1", 00:12:32.516 "superblock": false, 00:12:32.516 "num_base_bdevs": 4, 00:12:32.516 "num_base_bdevs_discovered": 4, 00:12:32.516 "num_base_bdevs_operational": 4, 00:12:32.516 "process": { 00:12:32.516 "type": "rebuild", 00:12:32.516 "target": "spare", 00:12:32.516 "progress": { 00:12:32.516 "blocks": 20480, 00:12:32.516 "percent": 31 00:12:32.516 } 00:12:32.516 }, 00:12:32.516 "base_bdevs_list": [ 00:12:32.516 { 00:12:32.516 "name": "spare", 00:12:32.516 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:32.516 "is_configured": true, 00:12:32.516 "data_offset": 0, 00:12:32.516 "data_size": 65536 00:12:32.516 }, 00:12:32.516 { 00:12:32.516 "name": "BaseBdev2", 00:12:32.516 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:32.516 "is_configured": true, 00:12:32.516 "data_offset": 0, 00:12:32.516 "data_size": 65536 00:12:32.516 }, 00:12:32.516 { 00:12:32.516 "name": "BaseBdev3", 00:12:32.516 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:32.516 "is_configured": true, 00:12:32.516 "data_offset": 0, 00:12:32.516 "data_size": 65536 00:12:32.516 }, 00:12:32.516 { 00:12:32.516 "name": "BaseBdev4", 00:12:32.516 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:32.516 
"is_configured": true, 00:12:32.516 "data_offset": 0, 00:12:32.516 "data_size": 65536 00:12:32.516 } 00:12:32.516 ] 00:12:32.516 }' 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.516 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.516 [2024-11-26 22:57:11.601500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.777 [2024-11-26 22:57:11.665463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.777 [2024-11-26 22:57:11.665586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.777 [2024-11-26 22:57:11.665609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.777 [2024-11-26 22:57:11.665622] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.777 22:57:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.777 "name": "raid_bdev1", 00:12:32.777 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:32.777 "strip_size_kb": 0, 00:12:32.777 "state": "online", 00:12:32.777 "raid_level": "raid1", 00:12:32.777 "superblock": false, 00:12:32.777 "num_base_bdevs": 4, 00:12:32.777 "num_base_bdevs_discovered": 3, 00:12:32.777 "num_base_bdevs_operational": 3, 00:12:32.777 "base_bdevs_list": [ 00:12:32.777 { 00:12:32.777 "name": null, 00:12:32.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.777 "is_configured": false, 00:12:32.777 "data_offset": 0, 00:12:32.777 "data_size": 65536 00:12:32.777 }, 00:12:32.777 { 00:12:32.777 "name": "BaseBdev2", 
00:12:32.777 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:32.777 "is_configured": true, 00:12:32.777 "data_offset": 0, 00:12:32.777 "data_size": 65536 00:12:32.777 }, 00:12:32.777 { 00:12:32.777 "name": "BaseBdev3", 00:12:32.777 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:32.777 "is_configured": true, 00:12:32.777 "data_offset": 0, 00:12:32.777 "data_size": 65536 00:12:32.777 }, 00:12:32.777 { 00:12:32.777 "name": "BaseBdev4", 00:12:32.777 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:32.777 "is_configured": true, 00:12:32.777 "data_offset": 0, 00:12:32.777 "data_size": 65536 00:12:32.777 } 00:12:32.777 ] 00:12:32.777 }' 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.777 22:57:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.037 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:33.302 "name": "raid_bdev1", 00:12:33.302 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:33.302 "strip_size_kb": 0, 00:12:33.302 "state": "online", 00:12:33.302 "raid_level": "raid1", 00:12:33.302 "superblock": false, 00:12:33.302 "num_base_bdevs": 4, 00:12:33.302 "num_base_bdevs_discovered": 3, 00:12:33.302 "num_base_bdevs_operational": 3, 00:12:33.302 "base_bdevs_list": [ 00:12:33.302 { 00:12:33.302 "name": null, 00:12:33.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.302 "is_configured": false, 00:12:33.302 "data_offset": 0, 00:12:33.302 "data_size": 65536 00:12:33.302 }, 00:12:33.302 { 00:12:33.302 "name": "BaseBdev2", 00:12:33.302 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:33.302 "is_configured": true, 00:12:33.302 "data_offset": 0, 00:12:33.302 "data_size": 65536 00:12:33.302 }, 00:12:33.302 { 00:12:33.302 "name": "BaseBdev3", 00:12:33.302 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:33.302 "is_configured": true, 00:12:33.302 "data_offset": 0, 00:12:33.302 "data_size": 65536 00:12:33.302 }, 00:12:33.302 { 00:12:33.302 "name": "BaseBdev4", 00:12:33.302 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:33.302 "is_configured": true, 00:12:33.302 "data_offset": 0, 00:12:33.302 "data_size": 65536 00:12:33.302 } 00:12:33.302 ] 00:12:33.302 }' 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.302 [2024-11-26 22:57:12.276818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.302 [2024-11-26 22:57:12.283622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.302 22:57:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:33.302 [2024-11-26 22:57:12.285894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.243 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.243 "name": "raid_bdev1", 00:12:34.243 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:34.243 "strip_size_kb": 0, 00:12:34.243 
"state": "online", 00:12:34.243 "raid_level": "raid1", 00:12:34.243 "superblock": false, 00:12:34.243 "num_base_bdevs": 4, 00:12:34.243 "num_base_bdevs_discovered": 4, 00:12:34.243 "num_base_bdevs_operational": 4, 00:12:34.243 "process": { 00:12:34.243 "type": "rebuild", 00:12:34.243 "target": "spare", 00:12:34.243 "progress": { 00:12:34.243 "blocks": 20480, 00:12:34.243 "percent": 31 00:12:34.243 } 00:12:34.243 }, 00:12:34.243 "base_bdevs_list": [ 00:12:34.243 { 00:12:34.244 "name": "spare", 00:12:34.244 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:34.244 "is_configured": true, 00:12:34.244 "data_offset": 0, 00:12:34.244 "data_size": 65536 00:12:34.244 }, 00:12:34.244 { 00:12:34.244 "name": "BaseBdev2", 00:12:34.244 "uuid": "71a33754-a643-516f-8b31-3afed8a12a33", 00:12:34.244 "is_configured": true, 00:12:34.244 "data_offset": 0, 00:12:34.244 "data_size": 65536 00:12:34.244 }, 00:12:34.244 { 00:12:34.244 "name": "BaseBdev3", 00:12:34.244 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:34.244 "is_configured": true, 00:12:34.244 "data_offset": 0, 00:12:34.244 "data_size": 65536 00:12:34.244 }, 00:12:34.244 { 00:12:34.244 "name": "BaseBdev4", 00:12:34.244 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:34.244 "is_configured": true, 00:12:34.244 "data_offset": 0, 00:12:34.244 "data_size": 65536 00:12:34.244 } 00:12:34.244 ] 00:12:34.244 }' 00:12:34.244 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.503 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.503 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.503 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.503 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=4 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.504 [2024-11-26 22:57:13.448044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.504 [2024-11-26 22:57:13.495656] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.504 22:57:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.504 "name": "raid_bdev1", 00:12:34.504 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:34.504 "strip_size_kb": 0, 00:12:34.504 "state": "online", 00:12:34.504 "raid_level": "raid1", 00:12:34.504 "superblock": false, 00:12:34.504 "num_base_bdevs": 4, 00:12:34.504 "num_base_bdevs_discovered": 3, 00:12:34.504 "num_base_bdevs_operational": 3, 00:12:34.504 "process": { 00:12:34.504 "type": "rebuild", 00:12:34.504 "target": "spare", 00:12:34.504 "progress": { 00:12:34.504 "blocks": 24576, 00:12:34.504 "percent": 37 00:12:34.504 } 00:12:34.504 }, 00:12:34.504 "base_bdevs_list": [ 00:12:34.504 { 00:12:34.504 "name": "spare", 00:12:34.504 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 0, 00:12:34.504 "data_size": 65536 00:12:34.504 }, 00:12:34.504 { 00:12:34.504 "name": null, 00:12:34.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.504 "is_configured": false, 00:12:34.504 "data_offset": 0, 00:12:34.504 "data_size": 65536 00:12:34.504 }, 00:12:34.504 { 00:12:34.504 "name": "BaseBdev3", 00:12:34.504 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 0, 00:12:34.504 "data_size": 65536 00:12:34.504 }, 00:12:34.504 { 00:12:34.504 "name": "BaseBdev4", 00:12:34.504 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 0, 00:12:34.504 "data_size": 65536 00:12:34.504 } 00:12:34.504 ] 00:12:34.504 }' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.504 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=362 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.764 "name": "raid_bdev1", 00:12:34.764 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:34.764 "strip_size_kb": 0, 00:12:34.764 "state": "online", 00:12:34.764 "raid_level": "raid1", 00:12:34.764 "superblock": false, 00:12:34.764 "num_base_bdevs": 4, 00:12:34.764 "num_base_bdevs_discovered": 3, 00:12:34.764 
"num_base_bdevs_operational": 3, 00:12:34.764 "process": { 00:12:34.764 "type": "rebuild", 00:12:34.764 "target": "spare", 00:12:34.764 "progress": { 00:12:34.764 "blocks": 26624, 00:12:34.764 "percent": 40 00:12:34.764 } 00:12:34.764 }, 00:12:34.764 "base_bdevs_list": [ 00:12:34.764 { 00:12:34.764 "name": "spare", 00:12:34.764 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": null, 00:12:34.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.764 "is_configured": false, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": "BaseBdev3", 00:12:34.764 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": "BaseBdev4", 00:12:34.764 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 } 00:12:34.764 ] 00:12:34.764 }' 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.764 22:57:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.702 22:57:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.702 "name": "raid_bdev1", 00:12:35.702 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:35.702 "strip_size_kb": 0, 00:12:35.702 "state": "online", 00:12:35.702 "raid_level": "raid1", 00:12:35.702 "superblock": false, 00:12:35.702 "num_base_bdevs": 4, 00:12:35.702 "num_base_bdevs_discovered": 3, 00:12:35.702 "num_base_bdevs_operational": 3, 00:12:35.702 "process": { 00:12:35.702 "type": "rebuild", 00:12:35.702 "target": "spare", 00:12:35.702 "progress": { 00:12:35.702 "blocks": 49152, 00:12:35.702 "percent": 75 00:12:35.702 } 00:12:35.702 }, 00:12:35.702 "base_bdevs_list": [ 00:12:35.702 { 00:12:35.702 "name": "spare", 00:12:35.702 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:35.702 "is_configured": true, 00:12:35.702 "data_offset": 0, 00:12:35.702 "data_size": 65536 00:12:35.702 }, 00:12:35.702 { 00:12:35.702 "name": null, 00:12:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.702 "is_configured": false, 00:12:35.702 
"data_offset": 0, 00:12:35.702 "data_size": 65536 00:12:35.702 }, 00:12:35.702 { 00:12:35.702 "name": "BaseBdev3", 00:12:35.702 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:35.702 "is_configured": true, 00:12:35.702 "data_offset": 0, 00:12:35.702 "data_size": 65536 00:12:35.702 }, 00:12:35.702 { 00:12:35.702 "name": "BaseBdev4", 00:12:35.702 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:35.702 "is_configured": true, 00:12:35.702 "data_offset": 0, 00:12:35.702 "data_size": 65536 00:12:35.702 } 00:12:35.702 ] 00:12:35.702 }' 00:12:35.702 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.962 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.962 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.962 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.962 22:57:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.530 [2024-11-26 22:57:15.512379] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:36.530 [2024-11-26 22:57:15.512517] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:36.530 [2024-11-26 22:57:15.512567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:36.797 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.056 "name": "raid_bdev1", 00:12:37.056 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:37.056 "strip_size_kb": 0, 00:12:37.056 "state": "online", 00:12:37.056 "raid_level": "raid1", 00:12:37.056 "superblock": false, 00:12:37.056 "num_base_bdevs": 4, 00:12:37.056 "num_base_bdevs_discovered": 3, 00:12:37.056 "num_base_bdevs_operational": 3, 00:12:37.056 "base_bdevs_list": [ 00:12:37.056 { 00:12:37.056 "name": "spare", 00:12:37.056 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 0, 00:12:37.056 "data_size": 65536 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "name": null, 00:12:37.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.056 "is_configured": false, 00:12:37.056 "data_offset": 0, 00:12:37.056 "data_size": 65536 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "name": "BaseBdev3", 00:12:37.056 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 0, 00:12:37.056 "data_size": 65536 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "name": "BaseBdev4", 00:12:37.056 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 0, 00:12:37.056 
"data_size": 65536 00:12:37.056 } 00:12:37.056 ] 00:12:37.056 }' 00:12:37.056 22:57:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.056 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.056 "name": "raid_bdev1", 00:12:37.056 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:37.056 "strip_size_kb": 0, 00:12:37.057 "state": "online", 00:12:37.057 "raid_level": "raid1", 00:12:37.057 "superblock": false, 
00:12:37.057 "num_base_bdevs": 4, 00:12:37.057 "num_base_bdevs_discovered": 3, 00:12:37.057 "num_base_bdevs_operational": 3, 00:12:37.057 "base_bdevs_list": [ 00:12:37.057 { 00:12:37.057 "name": "spare", 00:12:37.057 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:37.057 "is_configured": true, 00:12:37.057 "data_offset": 0, 00:12:37.057 "data_size": 65536 00:12:37.057 }, 00:12:37.057 { 00:12:37.057 "name": null, 00:12:37.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.057 "is_configured": false, 00:12:37.057 "data_offset": 0, 00:12:37.057 "data_size": 65536 00:12:37.057 }, 00:12:37.057 { 00:12:37.057 "name": "BaseBdev3", 00:12:37.057 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:37.057 "is_configured": true, 00:12:37.057 "data_offset": 0, 00:12:37.057 "data_size": 65536 00:12:37.057 }, 00:12:37.057 { 00:12:37.057 "name": "BaseBdev4", 00:12:37.057 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:37.057 "is_configured": true, 00:12:37.057 "data_offset": 0, 00:12:37.057 "data_size": 65536 00:12:37.057 } 00:12:37.057 ] 00:12:37.057 }' 00:12:37.057 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.057 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.057 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.318 22:57:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.318 "name": "raid_bdev1", 00:12:37.318 "uuid": "974b823a-d7c3-4f5a-8321-acd78dbc9ee7", 00:12:37.318 "strip_size_kb": 0, 00:12:37.318 "state": "online", 00:12:37.318 "raid_level": "raid1", 00:12:37.318 "superblock": false, 00:12:37.318 "num_base_bdevs": 4, 00:12:37.318 "num_base_bdevs_discovered": 3, 00:12:37.318 "num_base_bdevs_operational": 3, 00:12:37.318 "base_bdevs_list": [ 00:12:37.318 { 00:12:37.318 "name": "spare", 00:12:37.318 "uuid": "4a4fcb5c-7a91-599d-8db7-e104d8f713a7", 00:12:37.318 "is_configured": true, 00:12:37.318 "data_offset": 0, 00:12:37.318 "data_size": 65536 00:12:37.318 }, 00:12:37.318 { 00:12:37.318 "name": null, 00:12:37.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.318 "is_configured": false, 00:12:37.318 
"data_offset": 0, 00:12:37.318 "data_size": 65536 00:12:37.318 }, 00:12:37.318 { 00:12:37.318 "name": "BaseBdev3", 00:12:37.318 "uuid": "875b550f-3287-5512-bb30-d2b255db6b82", 00:12:37.318 "is_configured": true, 00:12:37.318 "data_offset": 0, 00:12:37.318 "data_size": 65536 00:12:37.318 }, 00:12:37.318 { 00:12:37.318 "name": "BaseBdev4", 00:12:37.318 "uuid": "4ba9d5d5-a64b-59ee-93d1-62882c2327c2", 00:12:37.318 "is_configured": true, 00:12:37.318 "data_offset": 0, 00:12:37.318 "data_size": 65536 00:12:37.318 } 00:12:37.318 ] 00:12:37.318 }' 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.318 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.577 [2024-11-26 22:57:16.660373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.577 [2024-11-26 22:57:16.660456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.577 [2024-11-26 22:57:16.660577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.577 [2024-11-26 22:57:16.660698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.577 [2024-11-26 22:57:16.660755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:37.577 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:37.837 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:37.838 /dev/nbd0 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.838 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.838 1+0 records in 00:12:37.838 1+0 records out 00:12:37.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052342 s, 7.8 MB/s 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.097 22:57:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:38.098 /dev/nbd1 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.098 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.098 1+0 records in 00:12:38.098 1+0 records out 00:12:38.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045216 s, 9.1 MB/s 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.357 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.617 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89830 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 89830 ']' 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 89830 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89830 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 89830' 00:12:38.877 killing process with pid 89830 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 89830 00:12:38.877 Received shutdown signal, test time was about 60.000000 seconds 00:12:38.877 00:12:38.877 Latency(us) 00:12:38.877 [2024-11-26T22:57:18.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.877 [2024-11-26T22:57:18.005Z] =================================================================================================================== 00:12:38.877 [2024-11-26T22:57:18.005Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.877 [2024-11-26 22:57:17.809767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.877 22:57:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 89830 00:12:38.877 [2024-11-26 22:57:17.900169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.137 22:57:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:39.137 00:12:39.137 real 0m15.629s 00:12:39.137 user 0m17.534s 00:12:39.137 sys 0m3.248s 00:12:39.137 22:57:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.137 22:57:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.137 ************************************ 00:12:39.137 END TEST raid_rebuild_test 00:12:39.137 ************************************ 00:12:39.398 22:57:18 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:39.398 22:57:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.398 22:57:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.398 22:57:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.398 ************************************ 00:12:39.398 START TEST 
raid_rebuild_test_sb 00:12:39.398 ************************************ 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90255 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90255 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90255 ']' 00:12:39.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.398 22:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.399 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.399 Zero copy mechanism will not be used. 00:12:39.399 [2024-11-26 22:57:18.392987] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:12:39.399 [2024-11-26 22:57:18.393129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90255 ] 00:12:39.657 [2024-11-26 22:57:18.527973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:39.657 [2024-11-26 22:57:18.567068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.657 [2024-11-26 22:57:18.605303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.657 [2024-11-26 22:57:18.681363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.657 [2024-11-26 22:57:18.681397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 BaseBdev1_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 [2024-11-26 22:57:19.226774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.227 [2024-11-26 22:57:19.226933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.227 [2024-11-26 22:57:19.226987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.227 [2024-11-26 
22:57:19.227055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.227 [2024-11-26 22:57:19.229473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.227 [2024-11-26 22:57:19.229541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.227 BaseBdev1 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 BaseBdev2_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 [2024-11-26 22:57:19.261097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.227 [2024-11-26 22:57:19.261151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.227 [2024-11-26 22:57:19.261171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.227 [2024-11-26 22:57:19.261182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.227 [2024-11-26 22:57:19.263545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:40.227 [2024-11-26 22:57:19.263581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.227 BaseBdev2 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 BaseBdev3_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 [2024-11-26 22:57:19.295423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:40.227 [2024-11-26 22:57:19.295474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.227 [2024-11-26 22:57:19.295495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.227 [2024-11-26 22:57:19.295506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.227 [2024-11-26 22:57:19.297809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.227 [2024-11-26 22:57:19.297845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:40.227 BaseBdev3 00:12:40.227 22:57:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 BaseBdev4_malloc 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.227 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.227 [2024-11-26 22:57:19.351730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:40.227 [2024-11-26 22:57:19.351911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.227 [2024-11-26 22:57:19.351972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:40.227 [2024-11-26 22:57:19.352029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.487 [2024-11-26 22:57:19.355538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.487 [2024-11-26 22:57:19.355647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:40.487 BaseBdev4 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.487 spare_malloc 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.487 spare_delay 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.487 [2024-11-26 22:57:19.398459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.487 [2024-11-26 22:57:19.398566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.487 [2024-11-26 22:57:19.398599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:40.487 [2024-11-26 22:57:19.398628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.487 [2024-11-26 22:57:19.400926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.487 [2024-11-26 22:57:19.401002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.487 spare 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.487 [2024-11-26 22:57:19.410562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.487 [2024-11-26 22:57:19.412708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.487 [2024-11-26 22:57:19.412812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.487 [2024-11-26 22:57:19.412889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:40.487 [2024-11-26 22:57:19.413088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:40.487 [2024-11-26 22:57:19.413153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.487 [2024-11-26 22:57:19.413437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:40.487 [2024-11-26 22:57:19.413659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:40.487 [2024-11-26 22:57:19.413708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:40.487 [2024-11-26 22:57:19.413855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:40.487 22:57:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.487 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.487 "name": "raid_bdev1", 00:12:40.487 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:40.487 "strip_size_kb": 0, 00:12:40.487 "state": "online", 00:12:40.487 "raid_level": "raid1", 00:12:40.487 "superblock": true, 00:12:40.487 "num_base_bdevs": 4, 00:12:40.488 "num_base_bdevs_discovered": 4, 00:12:40.488 "num_base_bdevs_operational": 4, 00:12:40.488 "base_bdevs_list": [ 00:12:40.488 { 
00:12:40.488 "name": "BaseBdev1", 00:12:40.488 "uuid": "0711cd47-a8ef-537f-8f99-40cc35246daa", 00:12:40.488 "is_configured": true, 00:12:40.488 "data_offset": 2048, 00:12:40.488 "data_size": 63488 00:12:40.488 }, 00:12:40.488 { 00:12:40.488 "name": "BaseBdev2", 00:12:40.488 "uuid": "0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:40.488 "is_configured": true, 00:12:40.488 "data_offset": 2048, 00:12:40.488 "data_size": 63488 00:12:40.488 }, 00:12:40.488 { 00:12:40.488 "name": "BaseBdev3", 00:12:40.488 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:40.488 "is_configured": true, 00:12:40.488 "data_offset": 2048, 00:12:40.488 "data_size": 63488 00:12:40.488 }, 00:12:40.488 { 00:12:40.488 "name": "BaseBdev4", 00:12:40.488 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:40.488 "is_configured": true, 00:12:40.488 "data_offset": 2048, 00:12:40.488 "data_size": 63488 00:12:40.488 } 00:12:40.488 ] 00:12:40.488 }' 00:12:40.488 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.488 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.747 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.747 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.747 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.747 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.747 [2024-11-26 22:57:19.838988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.747 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.007 22:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:41.007 
[2024-11-26 22:57:20.110789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:41.007 /dev/nbd0 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.267 1+0 records in 00:12:41.267 1+0 records out 00:12:41.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390066 s, 10.5 MB/s 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:41.267 22:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:46.547 63488+0 records in 00:12:46.547 63488+0 records out 00:12:46.547 32505856 bytes (33 MB, 31 MiB) copied, 5.01968 s, 6.5 MB/s 00:12:46.547 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:46.547 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.547 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:46.547 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.547 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.548 [2024-11-26 22:57:25.415353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.548 [2024-11-26 22:57:25.427479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.548 22:57:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.548 "name": "raid_bdev1", 00:12:46.548 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:46.548 "strip_size_kb": 0, 00:12:46.548 "state": "online", 00:12:46.548 "raid_level": "raid1", 00:12:46.548 "superblock": true, 00:12:46.548 "num_base_bdevs": 4, 00:12:46.548 "num_base_bdevs_discovered": 3, 00:12:46.548 "num_base_bdevs_operational": 3, 00:12:46.548 "base_bdevs_list": [ 00:12:46.548 { 00:12:46.548 "name": null, 00:12:46.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.548 "is_configured": false, 00:12:46.548 "data_offset": 0, 00:12:46.548 "data_size": 63488 00:12:46.548 }, 00:12:46.548 { 00:12:46.548 "name": "BaseBdev2", 00:12:46.548 "uuid": "0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:46.548 "is_configured": true, 00:12:46.548 "data_offset": 2048, 00:12:46.548 "data_size": 63488 00:12:46.548 }, 00:12:46.548 { 00:12:46.548 "name": "BaseBdev3", 00:12:46.548 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:46.548 "is_configured": true, 00:12:46.548 "data_offset": 2048, 00:12:46.548 "data_size": 63488 00:12:46.548 }, 00:12:46.548 { 00:12:46.548 "name": "BaseBdev4", 00:12:46.548 "uuid": 
"8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:46.548 "is_configured": true, 00:12:46.548 "data_offset": 2048, 00:12:46.548 "data_size": 63488 00:12:46.548 } 00:12:46.548 ] 00:12:46.548 }' 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.548 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.809 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.809 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.809 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.809 [2024-11-26 22:57:25.899508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.809 [2024-11-26 22:57:25.906875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:12:46.809 22:57:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.809 22:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:46.809 [2024-11-26 22:57:25.909157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.191 22:57:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.191 "name": "raid_bdev1", 00:12:48.191 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:48.191 "strip_size_kb": 0, 00:12:48.191 "state": "online", 00:12:48.191 "raid_level": "raid1", 00:12:48.191 "superblock": true, 00:12:48.191 "num_base_bdevs": 4, 00:12:48.191 "num_base_bdevs_discovered": 4, 00:12:48.191 "num_base_bdevs_operational": 4, 00:12:48.191 "process": { 00:12:48.191 "type": "rebuild", 00:12:48.191 "target": "spare", 00:12:48.191 "progress": { 00:12:48.191 "blocks": 20480, 00:12:48.191 "percent": 32 00:12:48.191 } 00:12:48.191 }, 00:12:48.191 "base_bdevs_list": [ 00:12:48.191 { 00:12:48.191 "name": "spare", 00:12:48.191 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:48.191 "is_configured": true, 00:12:48.191 "data_offset": 2048, 00:12:48.191 "data_size": 63488 00:12:48.191 }, 00:12:48.191 { 00:12:48.191 "name": "BaseBdev2", 00:12:48.191 "uuid": "0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:48.191 "is_configured": true, 00:12:48.191 "data_offset": 2048, 00:12:48.191 "data_size": 63488 00:12:48.191 }, 00:12:48.191 { 00:12:48.191 "name": "BaseBdev3", 00:12:48.191 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:48.191 "is_configured": true, 00:12:48.191 "data_offset": 2048, 00:12:48.191 "data_size": 63488 00:12:48.191 }, 00:12:48.191 { 00:12:48.191 "name": "BaseBdev4", 00:12:48.191 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:48.191 "is_configured": true, 00:12:48.191 "data_offset": 2048, 00:12:48.191 "data_size": 63488 
00:12:48.191 } 00:12:48.191 ] 00:12:48.191 }' 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.191 22:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.191 [2024-11-26 22:57:27.027524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.191 [2024-11-26 22:57:27.119503] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.191 [2024-11-26 22:57:27.119571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.191 [2024-11-26 22:57:27.119589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.191 [2024-11-26 22:57:27.119602] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.191 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.192 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.192 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.192 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.192 "name": "raid_bdev1", 00:12:48.192 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:48.192 "strip_size_kb": 0, 00:12:48.192 "state": "online", 00:12:48.192 "raid_level": "raid1", 00:12:48.192 "superblock": true, 00:12:48.192 "num_base_bdevs": 4, 00:12:48.192 "num_base_bdevs_discovered": 3, 00:12:48.192 "num_base_bdevs_operational": 3, 00:12:48.192 "base_bdevs_list": [ 00:12:48.192 { 00:12:48.192 "name": null, 00:12:48.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.192 "is_configured": false, 00:12:48.192 "data_offset": 0, 00:12:48.192 "data_size": 63488 00:12:48.192 }, 00:12:48.192 { 00:12:48.192 "name": "BaseBdev2", 00:12:48.192 "uuid": 
"0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:48.192 "is_configured": true, 00:12:48.192 "data_offset": 2048, 00:12:48.192 "data_size": 63488 00:12:48.192 }, 00:12:48.192 { 00:12:48.192 "name": "BaseBdev3", 00:12:48.192 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:48.192 "is_configured": true, 00:12:48.192 "data_offset": 2048, 00:12:48.192 "data_size": 63488 00:12:48.192 }, 00:12:48.192 { 00:12:48.192 "name": "BaseBdev4", 00:12:48.192 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:48.192 "is_configured": true, 00:12:48.192 "data_offset": 2048, 00:12:48.192 "data_size": 63488 00:12:48.192 } 00:12:48.192 ] 00:12:48.192 }' 00:12:48.192 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.192 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.451 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.711 "name": "raid_bdev1", 00:12:48.711 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:48.711 "strip_size_kb": 0, 00:12:48.711 "state": "online", 00:12:48.711 "raid_level": "raid1", 00:12:48.711 "superblock": true, 00:12:48.711 "num_base_bdevs": 4, 00:12:48.711 "num_base_bdevs_discovered": 3, 00:12:48.711 "num_base_bdevs_operational": 3, 00:12:48.711 "base_bdevs_list": [ 00:12:48.711 { 00:12:48.711 "name": null, 00:12:48.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.711 "is_configured": false, 00:12:48.711 "data_offset": 0, 00:12:48.711 "data_size": 63488 00:12:48.711 }, 00:12:48.711 { 00:12:48.711 "name": "BaseBdev2", 00:12:48.711 "uuid": "0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:48.711 "is_configured": true, 00:12:48.711 "data_offset": 2048, 00:12:48.711 "data_size": 63488 00:12:48.711 }, 00:12:48.711 { 00:12:48.711 "name": "BaseBdev3", 00:12:48.711 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:48.711 "is_configured": true, 00:12:48.711 "data_offset": 2048, 00:12:48.711 "data_size": 63488 00:12:48.711 }, 00:12:48.711 { 00:12:48.711 "name": "BaseBdev4", 00:12:48.711 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:48.711 "is_configured": true, 00:12:48.711 "data_offset": 2048, 00:12:48.711 "data_size": 63488 00:12:48.711 } 00:12:48.711 ] 00:12:48.711 }' 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.711 [2024-11-26 22:57:27.714679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.711 [2024-11-26 22:57:27.721195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.711 22:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:48.711 [2024-11-26 22:57:27.723377] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.649 22:57:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.910 "name": "raid_bdev1", 00:12:49.910 "uuid": 
"83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:49.910 "strip_size_kb": 0, 00:12:49.910 "state": "online", 00:12:49.910 "raid_level": "raid1", 00:12:49.910 "superblock": true, 00:12:49.910 "num_base_bdevs": 4, 00:12:49.910 "num_base_bdevs_discovered": 4, 00:12:49.910 "num_base_bdevs_operational": 4, 00:12:49.910 "process": { 00:12:49.910 "type": "rebuild", 00:12:49.910 "target": "spare", 00:12:49.910 "progress": { 00:12:49.910 "blocks": 20480, 00:12:49.910 "percent": 32 00:12:49.910 } 00:12:49.910 }, 00:12:49.910 "base_bdevs_list": [ 00:12:49.910 { 00:12:49.910 "name": "spare", 00:12:49.910 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:49.910 "is_configured": true, 00:12:49.910 "data_offset": 2048, 00:12:49.910 "data_size": 63488 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "name": "BaseBdev2", 00:12:49.910 "uuid": "0a956149-f5a5-554a-9fe6-5ed1004f0ed1", 00:12:49.910 "is_configured": true, 00:12:49.910 "data_offset": 2048, 00:12:49.910 "data_size": 63488 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "name": "BaseBdev3", 00:12:49.910 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:49.910 "is_configured": true, 00:12:49.910 "data_offset": 2048, 00:12:49.910 "data_size": 63488 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "name": "BaseBdev4", 00:12:49.910 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:49.910 "is_configured": true, 00:12:49.910 "data_offset": 2048, 00:12:49.910 "data_size": 63488 00:12:49.910 } 00:12:49.910 ] 00:12:49.910 }' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:49.910 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.910 22:57:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.910 [2024-11-26 22:57:28.882092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.910 [2024-11-26 22:57:29.033012] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:12:49.910 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.910 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.170 "name": "raid_bdev1", 00:12:50.170 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:50.170 "strip_size_kb": 0, 00:12:50.170 "state": "online", 00:12:50.170 "raid_level": "raid1", 00:12:50.170 "superblock": true, 00:12:50.170 "num_base_bdevs": 4, 00:12:50.170 "num_base_bdevs_discovered": 3, 00:12:50.170 "num_base_bdevs_operational": 3, 00:12:50.170 "process": { 00:12:50.170 "type": "rebuild", 00:12:50.170 "target": "spare", 00:12:50.170 "progress": { 00:12:50.170 "blocks": 24576, 00:12:50.170 "percent": 38 00:12:50.170 } 00:12:50.170 }, 00:12:50.170 "base_bdevs_list": [ 00:12:50.170 { 00:12:50.170 "name": "spare", 00:12:50.170 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:50.170 "is_configured": true, 00:12:50.170 "data_offset": 2048, 00:12:50.170 "data_size": 63488 00:12:50.170 }, 00:12:50.170 { 00:12:50.170 "name": null, 00:12:50.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.170 "is_configured": false, 00:12:50.170 "data_offset": 0, 00:12:50.170 "data_size": 63488 00:12:50.170 }, 00:12:50.170 { 00:12:50.170 "name": "BaseBdev3", 00:12:50.170 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:50.170 "is_configured": true, 00:12:50.170 "data_offset": 2048, 00:12:50.170 "data_size": 63488 00:12:50.170 }, 00:12:50.170 { 00:12:50.170 "name": 
"BaseBdev4", 00:12:50.170 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:50.170 "is_configured": true, 00:12:50.170 "data_offset": 2048, 00:12:50.170 "data_size": 63488 00:12:50.170 } 00:12:50.170 ] 00:12:50.170 }' 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.170 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.171 "name": "raid_bdev1", 00:12:50.171 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:50.171 "strip_size_kb": 0, 00:12:50.171 "state": "online", 00:12:50.171 "raid_level": "raid1", 00:12:50.171 "superblock": true, 00:12:50.171 "num_base_bdevs": 4, 00:12:50.171 "num_base_bdevs_discovered": 3, 00:12:50.171 "num_base_bdevs_operational": 3, 00:12:50.171 "process": { 00:12:50.171 "type": "rebuild", 00:12:50.171 "target": "spare", 00:12:50.171 "progress": { 00:12:50.171 "blocks": 26624, 00:12:50.171 "percent": 41 00:12:50.171 } 00:12:50.171 }, 00:12:50.171 "base_bdevs_list": [ 00:12:50.171 { 00:12:50.171 "name": "spare", 00:12:50.171 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:50.171 "is_configured": true, 00:12:50.171 "data_offset": 2048, 00:12:50.171 "data_size": 63488 00:12:50.171 }, 00:12:50.171 { 00:12:50.171 "name": null, 00:12:50.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.171 "is_configured": false, 00:12:50.171 "data_offset": 0, 00:12:50.171 "data_size": 63488 00:12:50.171 }, 00:12:50.171 { 00:12:50.171 "name": "BaseBdev3", 00:12:50.171 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:50.171 "is_configured": true, 00:12:50.171 "data_offset": 2048, 00:12:50.171 "data_size": 63488 00:12:50.171 }, 00:12:50.171 { 00:12:50.171 "name": "BaseBdev4", 00:12:50.171 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:50.171 "is_configured": true, 00:12:50.171 "data_offset": 2048, 00:12:50.171 "data_size": 63488 00:12:50.171 } 00:12:50.171 ] 00:12:50.171 }' 00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.171 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.430 22:57:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.430 22:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.369 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.369 "name": "raid_bdev1", 00:12:51.370 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:51.370 "strip_size_kb": 0, 00:12:51.370 "state": "online", 00:12:51.370 "raid_level": "raid1", 00:12:51.370 "superblock": true, 00:12:51.370 "num_base_bdevs": 4, 00:12:51.370 "num_base_bdevs_discovered": 3, 00:12:51.370 "num_base_bdevs_operational": 3, 00:12:51.370 "process": { 00:12:51.370 "type": "rebuild", 00:12:51.370 "target": "spare", 00:12:51.370 "progress": { 00:12:51.370 "blocks": 
49152, 00:12:51.370 "percent": 77 00:12:51.370 } 00:12:51.370 }, 00:12:51.370 "base_bdevs_list": [ 00:12:51.370 { 00:12:51.370 "name": "spare", 00:12:51.370 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:51.370 "is_configured": true, 00:12:51.370 "data_offset": 2048, 00:12:51.370 "data_size": 63488 00:12:51.370 }, 00:12:51.370 { 00:12:51.370 "name": null, 00:12:51.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.370 "is_configured": false, 00:12:51.370 "data_offset": 0, 00:12:51.370 "data_size": 63488 00:12:51.370 }, 00:12:51.370 { 00:12:51.370 "name": "BaseBdev3", 00:12:51.370 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:51.370 "is_configured": true, 00:12:51.370 "data_offset": 2048, 00:12:51.370 "data_size": 63488 00:12:51.370 }, 00:12:51.370 { 00:12:51.370 "name": "BaseBdev4", 00:12:51.370 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:51.370 "is_configured": true, 00:12:51.370 "data_offset": 2048, 00:12:51.370 "data_size": 63488 00:12:51.370 } 00:12:51.370 ] 00:12:51.370 }' 00:12:51.370 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.370 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.370 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.370 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.370 22:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.940 [2024-11-26 22:57:30.948372] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:51.940 [2024-11-26 22:57:30.948448] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:51.940 [2024-11-26 22:57:30.948555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.509 22:57:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.509 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.509 "name": "raid_bdev1", 00:12:52.510 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:52.510 "strip_size_kb": 0, 00:12:52.510 "state": "online", 00:12:52.510 "raid_level": "raid1", 00:12:52.510 "superblock": true, 00:12:52.510 "num_base_bdevs": 4, 00:12:52.510 "num_base_bdevs_discovered": 3, 00:12:52.510 "num_base_bdevs_operational": 3, 00:12:52.510 "base_bdevs_list": [ 00:12:52.510 { 00:12:52.510 "name": "spare", 00:12:52.510 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:52.510 "is_configured": true, 00:12:52.510 "data_offset": 2048, 00:12:52.510 "data_size": 63488 00:12:52.510 }, 00:12:52.510 { 00:12:52.510 "name": null, 00:12:52.510 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:52.510 "is_configured": false, 00:12:52.510 "data_offset": 0, 00:12:52.510 "data_size": 63488 00:12:52.510 }, 00:12:52.510 { 00:12:52.510 "name": "BaseBdev3", 00:12:52.510 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:52.510 "is_configured": true, 00:12:52.510 "data_offset": 2048, 00:12:52.510 "data_size": 63488 00:12:52.510 }, 00:12:52.510 { 00:12:52.510 "name": "BaseBdev4", 00:12:52.510 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:52.510 "is_configured": true, 00:12:52.510 "data_offset": 2048, 00:12:52.510 "data_size": 63488 00:12:52.510 } 00:12:52.510 ] 00:12:52.510 }' 00:12:52.510 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.510 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:52.510 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.770 
22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.770 "name": "raid_bdev1", 00:12:52.770 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:52.770 "strip_size_kb": 0, 00:12:52.770 "state": "online", 00:12:52.770 "raid_level": "raid1", 00:12:52.770 "superblock": true, 00:12:52.770 "num_base_bdevs": 4, 00:12:52.770 "num_base_bdevs_discovered": 3, 00:12:52.770 "num_base_bdevs_operational": 3, 00:12:52.770 "base_bdevs_list": [ 00:12:52.770 { 00:12:52.770 "name": "spare", 00:12:52.770 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:52.770 "is_configured": true, 00:12:52.770 "data_offset": 2048, 00:12:52.770 "data_size": 63488 00:12:52.770 }, 00:12:52.770 { 00:12:52.770 "name": null, 00:12:52.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.770 "is_configured": false, 00:12:52.770 "data_offset": 0, 00:12:52.770 "data_size": 63488 00:12:52.770 }, 00:12:52.770 { 00:12:52.770 "name": "BaseBdev3", 00:12:52.770 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:52.770 "is_configured": true, 00:12:52.770 "data_offset": 2048, 00:12:52.770 "data_size": 63488 00:12:52.770 }, 00:12:52.770 { 00:12:52.770 "name": "BaseBdev4", 00:12:52.770 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:52.770 "is_configured": true, 00:12:52.770 "data_offset": 2048, 00:12:52.770 "data_size": 63488 00:12:52.770 } 00:12:52.770 ] 00:12:52.770 }' 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.770 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.770 "name": "raid_bdev1", 00:12:52.770 "uuid": 
"83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:52.770 "strip_size_kb": 0, 00:12:52.770 "state": "online", 00:12:52.771 "raid_level": "raid1", 00:12:52.771 "superblock": true, 00:12:52.771 "num_base_bdevs": 4, 00:12:52.771 "num_base_bdevs_discovered": 3, 00:12:52.771 "num_base_bdevs_operational": 3, 00:12:52.771 "base_bdevs_list": [ 00:12:52.771 { 00:12:52.771 "name": "spare", 00:12:52.771 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": null, 00:12:52.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.771 "is_configured": false, 00:12:52.771 "data_offset": 0, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": "BaseBdev3", 00:12:52.771 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": "BaseBdev4", 00:12:52.771 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 } 00:12:52.771 ] 00:12:52.771 }' 00:12:52.771 22:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.771 22:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.340 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.340 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.340 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.340 [2024-11-26 22:57:32.203207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.340 [2024-11-26 22:57:32.203299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:53.340 [2024-11-26 22:57:32.203426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.341 [2024-11-26 22:57:32.203531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.341 [2024-11-26 22:57:32.203573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.341 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:53.341 /dev/nbd0 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.601 1+0 records in 00:12:53.601 1+0 records out 00:12:53.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399703 s, 10.2 MB/s 00:12:53.601 22:57:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.601 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:53.601 /dev/nbd1 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.862 1+0 records in 00:12:53.862 1+0 records out 00:12:53.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431813 s, 9.5 MB/s 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:53.862 22:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.862 22:57:32 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.122 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.382 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.382 [2024-11-26 22:57:33.286398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.382 [2024-11-26 22:57:33.286477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.382 [2024-11-26 22:57:33.286506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:54.383 [2024-11-26 22:57:33.286522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.383 [2024-11-26 22:57:33.288709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.383 [2024-11-26 22:57:33.288790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.383 [2024-11-26 22:57:33.288877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:54.383 [2024-11-26 22:57:33.288926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.383 [2024-11-26 22:57:33.289041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:12:54.383 [2024-11-26 22:57:33.289141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.383 spare 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.383 [2024-11-26 22:57:33.389201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.383 [2024-11-26 22:57:33.389225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.383 [2024-11-26 22:57:33.389530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:12:54.383 [2024-11-26 22:57:33.389666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.383 [2024-11-26 22:57:33.389684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.383 [2024-11-26 22:57:33.389799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.383 "name": "raid_bdev1", 00:12:54.383 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:54.383 "strip_size_kb": 0, 00:12:54.383 "state": "online", 00:12:54.383 "raid_level": "raid1", 00:12:54.383 "superblock": true, 00:12:54.383 "num_base_bdevs": 4, 00:12:54.383 "num_base_bdevs_discovered": 3, 00:12:54.383 "num_base_bdevs_operational": 3, 00:12:54.383 "base_bdevs_list": [ 00:12:54.383 { 00:12:54.383 "name": "spare", 00:12:54.383 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:54.383 "is_configured": true, 00:12:54.383 "data_offset": 2048, 00:12:54.383 "data_size": 63488 00:12:54.383 }, 00:12:54.383 { 00:12:54.383 "name": null, 00:12:54.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.383 "is_configured": false, 00:12:54.383 "data_offset": 2048, 
00:12:54.383 "data_size": 63488 00:12:54.383 }, 00:12:54.383 { 00:12:54.383 "name": "BaseBdev3", 00:12:54.383 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:54.383 "is_configured": true, 00:12:54.383 "data_offset": 2048, 00:12:54.383 "data_size": 63488 00:12:54.383 }, 00:12:54.383 { 00:12:54.383 "name": "BaseBdev4", 00:12:54.383 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:54.383 "is_configured": true, 00:12:54.383 "data_offset": 2048, 00:12:54.383 "data_size": 63488 00:12:54.383 } 00:12:54.383 ] 00:12:54.383 }' 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.383 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.954 "name": "raid_bdev1", 00:12:54.954 "uuid": 
"83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:54.954 "strip_size_kb": 0, 00:12:54.954 "state": "online", 00:12:54.954 "raid_level": "raid1", 00:12:54.954 "superblock": true, 00:12:54.954 "num_base_bdevs": 4, 00:12:54.954 "num_base_bdevs_discovered": 3, 00:12:54.954 "num_base_bdevs_operational": 3, 00:12:54.954 "base_bdevs_list": [ 00:12:54.954 { 00:12:54.954 "name": "spare", 00:12:54.954 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:54.954 "is_configured": true, 00:12:54.954 "data_offset": 2048, 00:12:54.954 "data_size": 63488 00:12:54.954 }, 00:12:54.954 { 00:12:54.954 "name": null, 00:12:54.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.954 "is_configured": false, 00:12:54.954 "data_offset": 2048, 00:12:54.954 "data_size": 63488 00:12:54.954 }, 00:12:54.954 { 00:12:54.954 "name": "BaseBdev3", 00:12:54.954 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:54.954 "is_configured": true, 00:12:54.954 "data_offset": 2048, 00:12:54.954 "data_size": 63488 00:12:54.954 }, 00:12:54.954 { 00:12:54.954 "name": "BaseBdev4", 00:12:54.954 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:54.954 "is_configured": true, 00:12:54.954 "data_offset": 2048, 00:12:54.954 "data_size": 63488 00:12:54.954 } 00:12:54.954 ] 00:12:54.954 }' 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.954 22:57:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.954 [2024-11-26 22:57:34.034698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.954 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.214 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.214 "name": "raid_bdev1", 00:12:55.214 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:55.214 "strip_size_kb": 0, 00:12:55.214 "state": "online", 00:12:55.214 "raid_level": "raid1", 00:12:55.214 "superblock": true, 00:12:55.214 "num_base_bdevs": 4, 00:12:55.214 "num_base_bdevs_discovered": 2, 00:12:55.214 "num_base_bdevs_operational": 2, 00:12:55.214 "base_bdevs_list": [ 00:12:55.214 { 00:12:55.214 "name": null, 00:12:55.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.214 "is_configured": false, 00:12:55.214 "data_offset": 0, 00:12:55.214 "data_size": 63488 00:12:55.214 }, 00:12:55.214 { 00:12:55.214 "name": null, 00:12:55.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.214 "is_configured": false, 00:12:55.214 "data_offset": 2048, 00:12:55.214 "data_size": 63488 00:12:55.214 }, 00:12:55.214 { 00:12:55.214 "name": "BaseBdev3", 00:12:55.214 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:55.214 "is_configured": true, 00:12:55.214 "data_offset": 2048, 00:12:55.214 "data_size": 63488 00:12:55.214 }, 00:12:55.214 { 00:12:55.214 "name": "BaseBdev4", 00:12:55.214 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:55.214 "is_configured": true, 00:12:55.214 "data_offset": 2048, 00:12:55.214 "data_size": 63488 00:12:55.214 } 00:12:55.214 ] 00:12:55.214 }' 00:12:55.214 22:57:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.214 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.474 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.474 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.474 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.474 [2024-11-26 22:57:34.438810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.474 [2024-11-26 22:57:34.439021] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:55.474 [2024-11-26 22:57:34.439078] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:55.474 [2024-11-26 22:57:34.439154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.474 [2024-11-26 22:57:34.443361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:12:55.474 22:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.474 22:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:55.474 [2024-11-26 22:57:34.445298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.412 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.412 "name": "raid_bdev1", 00:12:56.412 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:56.412 "strip_size_kb": 0, 00:12:56.412 "state": "online", 00:12:56.412 "raid_level": "raid1", 00:12:56.412 "superblock": true, 00:12:56.412 "num_base_bdevs": 4, 00:12:56.412 "num_base_bdevs_discovered": 3, 00:12:56.412 "num_base_bdevs_operational": 3, 00:12:56.412 "process": { 00:12:56.412 "type": "rebuild", 00:12:56.412 "target": "spare", 00:12:56.412 "progress": { 00:12:56.412 "blocks": 20480, 00:12:56.412 "percent": 32 00:12:56.412 } 00:12:56.412 }, 00:12:56.412 "base_bdevs_list": [ 00:12:56.412 { 00:12:56.413 "name": "spare", 00:12:56.413 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:56.413 "is_configured": true, 00:12:56.413 "data_offset": 2048, 00:12:56.413 "data_size": 63488 00:12:56.413 }, 00:12:56.413 { 00:12:56.413 "name": null, 00:12:56.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.413 "is_configured": false, 00:12:56.413 "data_offset": 2048, 00:12:56.413 "data_size": 63488 00:12:56.413 }, 00:12:56.413 { 00:12:56.413 "name": "BaseBdev3", 00:12:56.413 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:56.413 "is_configured": true, 00:12:56.413 "data_offset": 2048, 00:12:56.413 "data_size": 
63488 00:12:56.413 }, 00:12:56.413 { 00:12:56.413 "name": "BaseBdev4", 00:12:56.413 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:56.413 "is_configured": true, 00:12:56.413 "data_offset": 2048, 00:12:56.413 "data_size": 63488 00:12:56.413 } 00:12:56.413 ] 00:12:56.413 }' 00:12:56.413 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.673 [2024-11-26 22:57:35.608841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.673 [2024-11-26 22:57:35.651431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.673 [2024-11-26 22:57:35.651530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.673 [2024-11-26 22:57:35.651567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.673 [2024-11-26 22:57:35.651587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.673 "name": "raid_bdev1", 00:12:56.673 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:56.673 "strip_size_kb": 0, 00:12:56.673 "state": "online", 00:12:56.673 "raid_level": "raid1", 00:12:56.673 "superblock": true, 00:12:56.673 "num_base_bdevs": 4, 00:12:56.673 "num_base_bdevs_discovered": 2, 00:12:56.673 "num_base_bdevs_operational": 2, 00:12:56.673 "base_bdevs_list": [ 00:12:56.673 { 00:12:56.673 "name": null, 
00:12:56.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.673 "is_configured": false, 00:12:56.673 "data_offset": 0, 00:12:56.673 "data_size": 63488 00:12:56.673 }, 00:12:56.673 { 00:12:56.673 "name": null, 00:12:56.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.673 "is_configured": false, 00:12:56.673 "data_offset": 2048, 00:12:56.673 "data_size": 63488 00:12:56.673 }, 00:12:56.673 { 00:12:56.673 "name": "BaseBdev3", 00:12:56.673 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:56.673 "is_configured": true, 00:12:56.673 "data_offset": 2048, 00:12:56.673 "data_size": 63488 00:12:56.673 }, 00:12:56.673 { 00:12:56.673 "name": "BaseBdev4", 00:12:56.673 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:56.673 "is_configured": true, 00:12:56.673 "data_offset": 2048, 00:12:56.673 "data_size": 63488 00:12:56.673 } 00:12:56.673 ] 00:12:56.673 }' 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.673 22:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.244 22:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:57.244 22:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.244 22:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.244 [2024-11-26 22:57:36.115704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:57.244 [2024-11-26 22:57:36.115803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.244 [2024-11-26 22:57:36.115835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:57.244 [2024-11-26 22:57:36.115844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.244 [2024-11-26 22:57:36.116314] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:57.244 [2024-11-26 22:57:36.116333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:57.244 [2024-11-26 22:57:36.116419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:57.244 [2024-11-26 22:57:36.116430] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:57.244 [2024-11-26 22:57:36.116440] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:57.244 [2024-11-26 22:57:36.116461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.244 [2024-11-26 22:57:36.120365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:12:57.244 spare 00:12:57.244 22:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.244 22:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:57.244 [2024-11-26 22:57:36.122242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.184 
22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.184 "name": "raid_bdev1", 00:12:58.184 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:58.184 "strip_size_kb": 0, 00:12:58.184 "state": "online", 00:12:58.184 "raid_level": "raid1", 00:12:58.184 "superblock": true, 00:12:58.184 "num_base_bdevs": 4, 00:12:58.184 "num_base_bdevs_discovered": 3, 00:12:58.184 "num_base_bdevs_operational": 3, 00:12:58.184 "process": { 00:12:58.184 "type": "rebuild", 00:12:58.184 "target": "spare", 00:12:58.184 "progress": { 00:12:58.184 "blocks": 20480, 00:12:58.184 "percent": 32 00:12:58.184 } 00:12:58.184 }, 00:12:58.184 "base_bdevs_list": [ 00:12:58.184 { 00:12:58.184 "name": "spare", 00:12:58.184 "uuid": "3ea1ed2c-1a8c-53e8-9705-df8aaa656af3", 00:12:58.184 "is_configured": true, 00:12:58.184 "data_offset": 2048, 00:12:58.184 "data_size": 63488 00:12:58.184 }, 00:12:58.184 { 00:12:58.184 "name": null, 00:12:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.184 "is_configured": false, 00:12:58.184 "data_offset": 2048, 00:12:58.184 "data_size": 63488 00:12:58.184 }, 00:12:58.184 { 00:12:58.184 "name": "BaseBdev3", 00:12:58.184 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:58.184 "is_configured": true, 00:12:58.184 "data_offset": 2048, 00:12:58.184 "data_size": 63488 00:12:58.184 }, 00:12:58.184 { 00:12:58.184 "name": "BaseBdev4", 00:12:58.184 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:58.184 "is_configured": true, 00:12:58.184 "data_offset": 2048, 00:12:58.184 "data_size": 63488 00:12:58.184 } 00:12:58.184 ] 00:12:58.184 }' 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.184 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.184 [2024-11-26 22:57:37.287585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.445 [2024-11-26 22:57:37.328432] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:58.445 [2024-11-26 22:57:37.328504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.445 [2024-11-26 22:57:37.328520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.445 [2024-11-26 22:57:37.328539] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.445 "name": "raid_bdev1", 00:12:58.445 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:58.445 "strip_size_kb": 0, 00:12:58.445 "state": "online", 00:12:58.445 "raid_level": "raid1", 00:12:58.445 "superblock": true, 00:12:58.445 "num_base_bdevs": 4, 00:12:58.445 "num_base_bdevs_discovered": 2, 00:12:58.445 "num_base_bdevs_operational": 2, 00:12:58.445 "base_bdevs_list": [ 00:12:58.445 { 00:12:58.445 "name": null, 00:12:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.445 "is_configured": false, 00:12:58.445 "data_offset": 0, 00:12:58.445 "data_size": 63488 00:12:58.445 }, 00:12:58.445 { 00:12:58.445 "name": null, 00:12:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.445 "is_configured": false, 00:12:58.445 "data_offset": 2048, 
00:12:58.445 "data_size": 63488 00:12:58.445 }, 00:12:58.445 { 00:12:58.445 "name": "BaseBdev3", 00:12:58.445 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:58.445 "is_configured": true, 00:12:58.445 "data_offset": 2048, 00:12:58.445 "data_size": 63488 00:12:58.445 }, 00:12:58.445 { 00:12:58.445 "name": "BaseBdev4", 00:12:58.445 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:58.445 "is_configured": true, 00:12:58.445 "data_offset": 2048, 00:12:58.445 "data_size": 63488 00:12:58.445 } 00:12:58.445 ] 00:12:58.445 }' 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.445 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.704 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.964 "name": "raid_bdev1", 00:12:58.964 "uuid": 
"83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:58.964 "strip_size_kb": 0, 00:12:58.964 "state": "online", 00:12:58.964 "raid_level": "raid1", 00:12:58.964 "superblock": true, 00:12:58.964 "num_base_bdevs": 4, 00:12:58.964 "num_base_bdevs_discovered": 2, 00:12:58.964 "num_base_bdevs_operational": 2, 00:12:58.964 "base_bdevs_list": [ 00:12:58.964 { 00:12:58.965 "name": null, 00:12:58.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.965 "is_configured": false, 00:12:58.965 "data_offset": 0, 00:12:58.965 "data_size": 63488 00:12:58.965 }, 00:12:58.965 { 00:12:58.965 "name": null, 00:12:58.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.965 "is_configured": false, 00:12:58.965 "data_offset": 2048, 00:12:58.965 "data_size": 63488 00:12:58.965 }, 00:12:58.965 { 00:12:58.965 "name": "BaseBdev3", 00:12:58.965 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:58.965 "is_configured": true, 00:12:58.965 "data_offset": 2048, 00:12:58.965 "data_size": 63488 00:12:58.965 }, 00:12:58.965 { 00:12:58.965 "name": "BaseBdev4", 00:12:58.965 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:58.965 "is_configured": true, 00:12:58.965 "data_offset": 2048, 00:12:58.965 "data_size": 63488 00:12:58.965 } 00:12:58.965 ] 00:12:58.965 }' 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.965 [2024-11-26 22:57:37.952847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:58.965 [2024-11-26 22:57:37.952948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.965 [2024-11-26 22:57:37.952971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:58.965 [2024-11-26 22:57:37.952982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.965 [2024-11-26 22:57:37.953407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.965 [2024-11-26 22:57:37.953431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:58.965 [2024-11-26 22:57:37.953496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:58.965 [2024-11-26 22:57:37.953522] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:58.965 [2024-11-26 22:57:37.953530] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.965 [2024-11-26 22:57:37.953541] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:58.965 BaseBdev1 00:12:58.965 22:57:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.965 22:57:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 22:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.905 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.905 "name": "raid_bdev1", 00:12:59.905 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:12:59.905 "strip_size_kb": 0, 00:12:59.905 "state": "online", 00:12:59.905 
"raid_level": "raid1", 00:12:59.905 "superblock": true, 00:12:59.905 "num_base_bdevs": 4, 00:12:59.905 "num_base_bdevs_discovered": 2, 00:12:59.905 "num_base_bdevs_operational": 2, 00:12:59.905 "base_bdevs_list": [ 00:12:59.905 { 00:12:59.905 "name": null, 00:12:59.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.905 "is_configured": false, 00:12:59.905 "data_offset": 0, 00:12:59.905 "data_size": 63488 00:12:59.905 }, 00:12:59.905 { 00:12:59.905 "name": null, 00:12:59.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.905 "is_configured": false, 00:12:59.905 "data_offset": 2048, 00:12:59.905 "data_size": 63488 00:12:59.905 }, 00:12:59.905 { 00:12:59.905 "name": "BaseBdev3", 00:12:59.905 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:12:59.905 "is_configured": true, 00:12:59.905 "data_offset": 2048, 00:12:59.905 "data_size": 63488 00:12:59.905 }, 00:12:59.905 { 00:12:59.905 "name": "BaseBdev4", 00:12:59.905 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:12:59.905 "is_configured": true, 00:12:59.905 "data_offset": 2048, 00:12:59.905 "data_size": 63488 00:12:59.905 } 00:12:59.905 ] 00:12:59.905 }' 00:12:59.905 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.905 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.493 "name": "raid_bdev1", 00:13:00.493 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:13:00.493 "strip_size_kb": 0, 00:13:00.493 "state": "online", 00:13:00.493 "raid_level": "raid1", 00:13:00.493 "superblock": true, 00:13:00.493 "num_base_bdevs": 4, 00:13:00.493 "num_base_bdevs_discovered": 2, 00:13:00.493 "num_base_bdevs_operational": 2, 00:13:00.493 "base_bdevs_list": [ 00:13:00.493 { 00:13:00.493 "name": null, 00:13:00.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.493 "is_configured": false, 00:13:00.493 "data_offset": 0, 00:13:00.493 "data_size": 63488 00:13:00.493 }, 00:13:00.493 { 00:13:00.493 "name": null, 00:13:00.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.493 "is_configured": false, 00:13:00.493 "data_offset": 2048, 00:13:00.493 "data_size": 63488 00:13:00.493 }, 00:13:00.493 { 00:13:00.493 "name": "BaseBdev3", 00:13:00.493 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:13:00.493 "is_configured": true, 00:13:00.493 "data_offset": 2048, 00:13:00.493 "data_size": 63488 00:13:00.493 }, 00:13:00.493 { 00:13:00.493 "name": "BaseBdev4", 00:13:00.493 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:13:00.493 "is_configured": true, 00:13:00.493 "data_offset": 2048, 00:13:00.493 "data_size": 63488 00:13:00.493 } 00:13:00.493 ] 00:13:00.493 }' 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.493 [2024-11-26 22:57:39.561331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.493 [2024-11-26 22:57:39.561516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:00.493 [2024-11-26 22:57:39.561568] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:00.493 
request: 00:13:00.493 { 00:13:00.493 "base_bdev": "BaseBdev1", 00:13:00.493 "raid_bdev": "raid_bdev1", 00:13:00.493 "method": "bdev_raid_add_base_bdev", 00:13:00.493 "req_id": 1 00:13:00.493 } 00:13:00.493 Got JSON-RPC error response 00:13:00.493 response: 00:13:00.493 { 00:13:00.493 "code": -22, 00:13:00.493 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:00.493 } 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.493 22:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.875 "name": "raid_bdev1", 00:13:01.875 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:13:01.875 "strip_size_kb": 0, 00:13:01.875 "state": "online", 00:13:01.875 "raid_level": "raid1", 00:13:01.875 "superblock": true, 00:13:01.875 "num_base_bdevs": 4, 00:13:01.875 "num_base_bdevs_discovered": 2, 00:13:01.875 "num_base_bdevs_operational": 2, 00:13:01.875 "base_bdevs_list": [ 00:13:01.875 { 00:13:01.875 "name": null, 00:13:01.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.875 "is_configured": false, 00:13:01.875 "data_offset": 0, 00:13:01.875 "data_size": 63488 00:13:01.875 }, 00:13:01.875 { 00:13:01.875 "name": null, 00:13:01.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.875 "is_configured": false, 00:13:01.875 "data_offset": 2048, 00:13:01.875 "data_size": 63488 00:13:01.875 }, 00:13:01.875 { 00:13:01.875 "name": "BaseBdev3", 00:13:01.875 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:13:01.875 "is_configured": true, 00:13:01.875 "data_offset": 2048, 00:13:01.875 "data_size": 63488 00:13:01.875 }, 00:13:01.875 { 00:13:01.875 "name": "BaseBdev4", 00:13:01.875 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:13:01.875 "is_configured": true, 00:13:01.875 
"data_offset": 2048, 00:13:01.875 "data_size": 63488 00:13:01.875 } 00:13:01.875 ] 00:13:01.875 }' 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.875 22:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.136 "name": "raid_bdev1", 00:13:02.136 "uuid": "83e72528-d2dc-41ce-833b-fd25ca94bf18", 00:13:02.136 "strip_size_kb": 0, 00:13:02.136 "state": "online", 00:13:02.136 "raid_level": "raid1", 00:13:02.136 "superblock": true, 00:13:02.136 "num_base_bdevs": 4, 00:13:02.136 "num_base_bdevs_discovered": 2, 00:13:02.136 "num_base_bdevs_operational": 2, 00:13:02.136 "base_bdevs_list": [ 00:13:02.136 { 00:13:02.136 "name": null, 00:13:02.136 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:02.136 "is_configured": false, 00:13:02.136 "data_offset": 0, 00:13:02.136 "data_size": 63488 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "name": null, 00:13:02.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.136 "is_configured": false, 00:13:02.136 "data_offset": 2048, 00:13:02.136 "data_size": 63488 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "name": "BaseBdev3", 00:13:02.136 "uuid": "6af9d8e8-ccc4-5bd1-9738-674b67461a4c", 00:13:02.136 "is_configured": true, 00:13:02.136 "data_offset": 2048, 00:13:02.136 "data_size": 63488 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "name": "BaseBdev4", 00:13:02.136 "uuid": "8c7f327b-d63b-5ac7-ad0e-3e796958541e", 00:13:02.136 "is_configured": true, 00:13:02.136 "data_offset": 2048, 00:13:02.136 "data_size": 63488 00:13:02.136 } 00:13:02.136 ] 00:13:02.136 }' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90255 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90255 ']' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90255 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90255 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.136 killing process with pid 90255 00:13:02.136 Received shutdown signal, test time was about 60.000000 seconds 00:13:02.136 00:13:02.136 Latency(us) 00:13:02.136 [2024-11-26T22:57:41.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.136 [2024-11-26T22:57:41.264Z] =================================================================================================================== 00:13:02.136 [2024-11-26T22:57:41.264Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90255' 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90255 00:13:02.136 [2024-11-26 22:57:41.218558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.136 [2024-11-26 22:57:41.218668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.136 [2024-11-26 22:57:41.218733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.136 [2024-11-26 22:57:41.218742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:02.136 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90255 00:13:02.396 [2024-11-26 22:57:41.270246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.396 22:57:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:02.396 00:13:02.396 real 0m23.194s 00:13:02.396 user 0m28.210s 00:13:02.396 sys 0m3.940s 00:13:02.396 ************************************ 00:13:02.396 END TEST raid_rebuild_test_sb 00:13:02.396 ************************************ 00:13:02.396 22:57:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.396 22:57:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.657 22:57:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:02.657 22:57:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:02.657 22:57:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.657 22:57:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.657 ************************************ 00:13:02.657 START TEST raid_rebuild_test_io 00:13:02.657 ************************************ 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.657 22:57:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90998 00:13:02.657 22:57:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90998 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 90998 ']' 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.657 22:57:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.657 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.657 Zero copy mechanism will not be used. 00:13:02.657 [2024-11-26 22:57:41.680643] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:02.657 [2024-11-26 22:57:41.680758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90998 ] 00:13:02.917 [2024-11-26 22:57:41.815113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:02.917 [2024-11-26 22:57:41.855102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.917 [2024-11-26 22:57:41.882656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.917 [2024-11-26 22:57:41.926376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.917 [2024-11-26 22:57:41.926410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 BaseBdev1_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 [2024-11-26 22:57:42.515556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.488 [2024-11-26 22:57:42.515618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.488 [2024-11-26 22:57:42.515643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:03.488 [2024-11-26 
22:57:42.515665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.488 [2024-11-26 22:57:42.517921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.488 [2024-11-26 22:57:42.518009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.488 BaseBdev1 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 BaseBdev2_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 [2024-11-26 22:57:42.544441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:03.488 [2024-11-26 22:57:42.544532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.488 [2024-11-26 22:57:42.544570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:03.488 [2024-11-26 22:57:42.544581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.488 [2024-11-26 22:57:42.546716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:03.488 [2024-11-26 22:57:42.546753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.488 BaseBdev2 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 BaseBdev3_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 [2024-11-26 22:57:42.573184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:03.488 [2024-11-26 22:57:42.573237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.488 [2024-11-26 22:57:42.573292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:03.488 [2024-11-26 22:57:42.573304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.488 [2024-11-26 22:57:42.575336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.488 [2024-11-26 22:57:42.575424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.488 BaseBdev3 00:13:03.488 22:57:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.488 BaseBdev4_malloc 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.488 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 [2024-11-26 22:57:42.616527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:03.761 [2024-11-26 22:57:42.616730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.762 [2024-11-26 22:57:42.616783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:03.762 [2024-11-26 22:57:42.616807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.762 [2024-11-26 22:57:42.620257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.762 [2024-11-26 22:57:42.620325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:03.762 BaseBdev4 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.762 spare_malloc 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.762 spare_delay 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.762 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.762 [2024-11-26 22:57:42.658183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.762 [2024-11-26 22:57:42.658230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.762 [2024-11-26 22:57:42.658266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:03.762 [2024-11-26 22:57:42.658293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.762 [2024-11-26 22:57:42.660335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.763 [2024-11-26 22:57:42.660371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.763 spare 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.763 [2024-11-26 22:57:42.670256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.763 [2024-11-26 22:57:42.672062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.763 [2024-11-26 22:57:42.672116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.763 [2024-11-26 22:57:42.672156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.763 [2024-11-26 22:57:42.672224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:03.763 [2024-11-26 22:57:42.672235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:03.763 [2024-11-26 22:57:42.672480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:03.763 [2024-11-26 22:57:42.672616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:03.763 [2024-11-26 22:57:42.672626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:03.763 [2024-11-26 22:57:42.672735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:03.763 22:57:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.763 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.764 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.764 "name": "raid_bdev1", 00:13:03.764 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:03.764 "strip_size_kb": 0, 00:13:03.764 "state": "online", 00:13:03.764 "raid_level": "raid1", 00:13:03.764 "superblock": false, 00:13:03.764 "num_base_bdevs": 4, 00:13:03.764 "num_base_bdevs_discovered": 4, 00:13:03.764 "num_base_bdevs_operational": 4, 00:13:03.764 "base_bdevs_list": [ 00:13:03.764 
{ 00:13:03.764 "name": "BaseBdev1", 00:13:03.764 "uuid": "e1fe5526-bb11-54b6-a1e5-a1bd36b107d3", 00:13:03.764 "is_configured": true, 00:13:03.764 "data_offset": 0, 00:13:03.764 "data_size": 65536 00:13:03.764 }, 00:13:03.764 { 00:13:03.764 "name": "BaseBdev2", 00:13:03.764 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:03.764 "is_configured": true, 00:13:03.764 "data_offset": 0, 00:13:03.764 "data_size": 65536 00:13:03.764 }, 00:13:03.764 { 00:13:03.764 "name": "BaseBdev3", 00:13:03.764 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:03.764 "is_configured": true, 00:13:03.764 "data_offset": 0, 00:13:03.764 "data_size": 65536 00:13:03.764 }, 00:13:03.764 { 00:13:03.764 "name": "BaseBdev4", 00:13:03.764 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:03.765 "is_configured": true, 00:13:03.765 "data_offset": 0, 00:13:03.765 "data_size": 65536 00:13:03.765 } 00:13:03.765 ] 00:13:03.765 }' 00:13:03.765 22:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.765 22:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.024 [2024-11-26 22:57:43.098612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.024 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.284 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.285 [2024-11-26 22:57:43.190345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.285 "name": "raid_bdev1", 00:13:04.285 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:04.285 "strip_size_kb": 0, 00:13:04.285 "state": "online", 00:13:04.285 "raid_level": "raid1", 00:13:04.285 "superblock": false, 00:13:04.285 "num_base_bdevs": 4, 00:13:04.285 "num_base_bdevs_discovered": 3, 00:13:04.285 "num_base_bdevs_operational": 3, 00:13:04.285 "base_bdevs_list": [ 00:13:04.285 { 00:13:04.285 "name": null, 00:13:04.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.285 "is_configured": false, 00:13:04.285 "data_offset": 0, 00:13:04.285 "data_size": 65536 00:13:04.285 }, 00:13:04.285 { 00:13:04.285 "name": "BaseBdev2", 00:13:04.285 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:04.285 "is_configured": true, 00:13:04.285 "data_offset": 0, 00:13:04.285 "data_size": 65536 00:13:04.285 }, 00:13:04.285 { 00:13:04.285 "name": "BaseBdev3", 00:13:04.285 "uuid": 
"5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:04.285 "is_configured": true, 00:13:04.285 "data_offset": 0, 00:13:04.285 "data_size": 65536 00:13:04.285 }, 00:13:04.285 { 00:13:04.285 "name": "BaseBdev4", 00:13:04.285 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:04.285 "is_configured": true, 00:13:04.285 "data_offset": 0, 00:13:04.285 "data_size": 65536 00:13:04.285 } 00:13:04.285 ] 00:13:04.285 }' 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.285 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.285 [2024-11-26 22:57:43.284390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:04.285 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:04.285 Zero copy mechanism will not be used. 00:13:04.285 Running I/O for 60 seconds... 00:13:04.544 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.544 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.544 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.544 [2024-11-26 22:57:43.665318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.804 22:57:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.804 22:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:04.804 [2024-11-26 22:57:43.720299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:04.804 [2024-11-26 22:57:43.722321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.804 [2024-11-26 22:57:43.831373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.804 
[2024-11-26 22:57:43.831975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.064 [2024-11-26 22:57:44.055003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.064 [2024-11-26 22:57:44.055769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.324 207.00 IOPS, 621.00 MiB/s [2024-11-26T22:57:44.452Z] [2024-11-26 22:57:44.404892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:05.324 [2024-11-26 22:57:44.405497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.583 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.844 22:57:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.844 "name": "raid_bdev1", 00:13:05.844 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:05.844 "strip_size_kb": 0, 00:13:05.844 "state": "online", 00:13:05.844 "raid_level": "raid1", 00:13:05.844 "superblock": false, 00:13:05.844 "num_base_bdevs": 4, 00:13:05.844 "num_base_bdevs_discovered": 4, 00:13:05.844 "num_base_bdevs_operational": 4, 00:13:05.844 "process": { 00:13:05.844 "type": "rebuild", 00:13:05.844 "target": "spare", 00:13:05.844 "progress": { 00:13:05.844 "blocks": 12288, 00:13:05.844 "percent": 18 00:13:05.844 } 00:13:05.844 }, 00:13:05.844 "base_bdevs_list": [ 00:13:05.844 { 00:13:05.844 "name": "spare", 00:13:05.844 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:05.844 "is_configured": true, 00:13:05.844 "data_offset": 0, 00:13:05.844 "data_size": 65536 00:13:05.844 }, 00:13:05.844 { 00:13:05.844 "name": "BaseBdev2", 00:13:05.844 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:05.844 "is_configured": true, 00:13:05.844 "data_offset": 0, 00:13:05.844 "data_size": 65536 00:13:05.844 }, 00:13:05.844 { 00:13:05.844 "name": "BaseBdev3", 00:13:05.844 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:05.844 "is_configured": true, 00:13:05.844 "data_offset": 0, 00:13:05.844 "data_size": 65536 00:13:05.844 }, 00:13:05.844 { 00:13:05.844 "name": "BaseBdev4", 00:13:05.844 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:05.844 "is_configured": true, 00:13:05.844 "data_offset": 0, 00:13:05.844 "data_size": 65536 00:13:05.844 } 00:13:05.844 ] 00:13:05.844 }' 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.844 [2024-11-26 22:57:44.802938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.844 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.844 [2024-11-26 22:57:44.862534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.844 [2024-11-26 22:57:44.919901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:05.844 [2024-11-26 22:57:44.947486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.844 [2024-11-26 22:57:44.956701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.844 [2024-11-26 22:57:44.956750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.844 [2024-11-26 22:57:44.956764] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.105 [2024-11-26 22:57:44.985368] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.105 22:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.105 "name": "raid_bdev1", 00:13:06.105 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:06.105 "strip_size_kb": 0, 00:13:06.105 "state": "online", 00:13:06.105 "raid_level": "raid1", 00:13:06.105 "superblock": false, 00:13:06.105 "num_base_bdevs": 4, 00:13:06.105 "num_base_bdevs_discovered": 3, 00:13:06.105 "num_base_bdevs_operational": 3, 00:13:06.105 "base_bdevs_list": [ 00:13:06.105 { 00:13:06.105 "name": null, 00:13:06.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.105 "is_configured": false, 00:13:06.105 "data_offset": 0, 00:13:06.105 "data_size": 65536 00:13:06.105 }, 00:13:06.105 { 00:13:06.105 "name": "BaseBdev2", 
00:13:06.105 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:06.105 "is_configured": true, 00:13:06.105 "data_offset": 0, 00:13:06.105 "data_size": 65536 00:13:06.105 }, 00:13:06.105 { 00:13:06.105 "name": "BaseBdev3", 00:13:06.105 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:06.105 "is_configured": true, 00:13:06.105 "data_offset": 0, 00:13:06.105 "data_size": 65536 00:13:06.105 }, 00:13:06.105 { 00:13:06.105 "name": "BaseBdev4", 00:13:06.105 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:06.105 "is_configured": true, 00:13:06.105 "data_offset": 0, 00:13:06.105 "data_size": 65536 00:13:06.105 } 00:13:06.105 ] 00:13:06.105 }' 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.105 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.365 184.00 IOPS, 552.00 MiB/s [2024-11-26T22:57:45.493Z] 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.365 "name": "raid_bdev1", 00:13:06.365 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:06.365 "strip_size_kb": 0, 00:13:06.365 "state": "online", 00:13:06.365 "raid_level": "raid1", 00:13:06.365 "superblock": false, 00:13:06.365 "num_base_bdevs": 4, 00:13:06.365 "num_base_bdevs_discovered": 3, 00:13:06.365 "num_base_bdevs_operational": 3, 00:13:06.365 "base_bdevs_list": [ 00:13:06.365 { 00:13:06.365 "name": null, 00:13:06.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.365 "is_configured": false, 00:13:06.365 "data_offset": 0, 00:13:06.365 "data_size": 65536 00:13:06.365 }, 00:13:06.365 { 00:13:06.365 "name": "BaseBdev2", 00:13:06.365 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:06.365 "is_configured": true, 00:13:06.365 "data_offset": 0, 00:13:06.365 "data_size": 65536 00:13:06.365 }, 00:13:06.365 { 00:13:06.365 "name": "BaseBdev3", 00:13:06.365 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:06.365 "is_configured": true, 00:13:06.365 "data_offset": 0, 00:13:06.365 "data_size": 65536 00:13:06.365 }, 00:13:06.365 { 00:13:06.365 "name": "BaseBdev4", 00:13:06.365 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:06.365 "is_configured": true, 00:13:06.365 "data_offset": 0, 00:13:06.365 "data_size": 65536 00:13:06.365 } 00:13:06.365 ] 00:13:06.365 }' 00:13:06.365 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.626 [2024-11-26 22:57:45.584164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.626 22:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:06.626 [2024-11-26 22:57:45.631569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:13:06.626 [2024-11-26 22:57:45.633602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.626 [2024-11-26 22:57:45.747436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.626 [2024-11-26 22:57:45.747983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.886 [2024-11-26 22:57:45.877207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.886 [2024-11-26 22:57:45.877897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:07.145 [2024-11-26 22:57:46.207757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:07.145 [2024-11-26 22:57:46.208982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:07.405 174.67 IOPS, 524.00 MiB/s [2024-11-26T22:57:46.533Z] [2024-11-26 22:57:46.423618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:07.665 22:57:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.665 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.665 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.666 "name": "raid_bdev1", 00:13:07.666 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:07.666 "strip_size_kb": 0, 00:13:07.666 "state": "online", 00:13:07.666 "raid_level": "raid1", 00:13:07.666 "superblock": false, 00:13:07.666 "num_base_bdevs": 4, 00:13:07.666 "num_base_bdevs_discovered": 4, 00:13:07.666 "num_base_bdevs_operational": 4, 00:13:07.666 "process": { 00:13:07.666 "type": "rebuild", 00:13:07.666 "target": "spare", 00:13:07.666 "progress": { 00:13:07.666 "blocks": 10240, 00:13:07.666 "percent": 15 00:13:07.666 } 00:13:07.666 }, 00:13:07.666 "base_bdevs_list": [ 00:13:07.666 { 00:13:07.666 "name": "spare", 00:13:07.666 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:07.666 "is_configured": true, 00:13:07.666 "data_offset": 0, 00:13:07.666 "data_size": 65536 
00:13:07.666 }, 00:13:07.666 { 00:13:07.666 "name": "BaseBdev2", 00:13:07.666 "uuid": "bb3cbfb7-8466-5c0e-9ca1-ddd5f3083606", 00:13:07.666 "is_configured": true, 00:13:07.666 "data_offset": 0, 00:13:07.666 "data_size": 65536 00:13:07.666 }, 00:13:07.666 { 00:13:07.666 "name": "BaseBdev3", 00:13:07.666 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:07.666 "is_configured": true, 00:13:07.666 "data_offset": 0, 00:13:07.666 "data_size": 65536 00:13:07.666 }, 00:13:07.666 { 00:13:07.666 "name": "BaseBdev4", 00:13:07.666 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:07.666 "is_configured": true, 00:13:07.666 "data_offset": 0, 00:13:07.666 "data_size": 65536 00:13:07.666 } 00:13:07.666 ] 00:13:07.666 }' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.666 [2024-11-26 22:57:46.763803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:07.666 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.666 [2024-11-26 22:57:46.772869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:07.926 [2024-11-26 22:57:46.979928] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:13:07.926 [2024-11-26 22:57:46.980015] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:13:07.926 [2024-11-26 22:57:46.981086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.926 22:57:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 
22:57:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.926 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.926 "name": "raid_bdev1", 00:13:07.926 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:07.926 "strip_size_kb": 0, 00:13:07.926 "state": "online", 00:13:07.926 "raid_level": "raid1", 00:13:07.926 "superblock": false, 00:13:07.926 "num_base_bdevs": 4, 00:13:07.926 "num_base_bdevs_discovered": 3, 00:13:07.926 "num_base_bdevs_operational": 3, 00:13:07.926 "process": { 00:13:07.926 "type": "rebuild", 00:13:07.926 "target": "spare", 00:13:07.926 "progress": { 00:13:07.926 "blocks": 16384, 00:13:07.926 "percent": 25 00:13:07.926 } 00:13:07.926 }, 00:13:07.926 "base_bdevs_list": [ 00:13:07.926 { 00:13:07.926 "name": "spare", 00:13:07.926 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:07.926 "is_configured": true, 00:13:07.926 "data_offset": 0, 00:13:07.926 "data_size": 65536 00:13:07.926 }, 00:13:07.926 { 00:13:07.926 "name": null, 00:13:07.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.926 "is_configured": false, 00:13:07.926 "data_offset": 0, 00:13:07.926 "data_size": 65536 00:13:07.926 }, 00:13:07.926 { 00:13:07.926 "name": "BaseBdev3", 00:13:07.926 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:07.926 "is_configured": true, 00:13:07.926 "data_offset": 0, 00:13:07.926 "data_size": 65536 00:13:07.926 }, 00:13:07.926 { 00:13:07.926 "name": "BaseBdev4", 00:13:07.926 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:07.926 "is_configured": true, 00:13:07.926 "data_offset": 0, 00:13:07.926 "data_size": 65536 00:13:07.926 } 00:13:07.926 ] 00:13:07.926 }' 00:13:07.926 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.187 "name": "raid_bdev1", 00:13:08.187 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:08.187 "strip_size_kb": 0, 00:13:08.187 "state": "online", 00:13:08.187 "raid_level": "raid1", 00:13:08.187 "superblock": false, 00:13:08.187 "num_base_bdevs": 4, 00:13:08.187 "num_base_bdevs_discovered": 3, 00:13:08.187 "num_base_bdevs_operational": 3, 00:13:08.187 "process": { 00:13:08.187 "type": 
"rebuild", 00:13:08.187 "target": "spare", 00:13:08.187 "progress": { 00:13:08.187 "blocks": 18432, 00:13:08.187 "percent": 28 00:13:08.187 } 00:13:08.187 }, 00:13:08.187 "base_bdevs_list": [ 00:13:08.187 { 00:13:08.187 "name": "spare", 00:13:08.187 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:08.187 "is_configured": true, 00:13:08.187 "data_offset": 0, 00:13:08.187 "data_size": 65536 00:13:08.187 }, 00:13:08.187 { 00:13:08.187 "name": null, 00:13:08.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.187 "is_configured": false, 00:13:08.187 "data_offset": 0, 00:13:08.187 "data_size": 65536 00:13:08.187 }, 00:13:08.187 { 00:13:08.187 "name": "BaseBdev3", 00:13:08.187 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:08.187 "is_configured": true, 00:13:08.187 "data_offset": 0, 00:13:08.187 "data_size": 65536 00:13:08.187 }, 00:13:08.187 { 00:13:08.187 "name": "BaseBdev4", 00:13:08.187 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:08.187 "is_configured": true, 00:13:08.187 "data_offset": 0, 00:13:08.187 "data_size": 65536 00:13:08.187 } 00:13:08.187 ] 00:13:08.187 }' 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.187 [2024-11-26 22:57:47.241089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.187 [2024-11-26 22:57:47.242015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.187 22:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.447 
152.25 IOPS, 456.75 MiB/s [2024-11-26T22:57:47.575Z] [2024-11-26 22:57:47.473796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:08.707 [2024-11-26 22:57:47.685497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:08.707 [2024-11-26 22:57:47.686026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:08.707 [2024-11-26 22:57:47.824025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:08.707 [2024-11-26 22:57:47.824550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:09.277 [2024-11-26 22:57:48.158555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.277 137.00 IOPS, 411.00 MiB/s [2024-11-26T22:57:48.405Z] 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.277 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.277 "name": "raid_bdev1", 00:13:09.277 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:09.277 "strip_size_kb": 0, 00:13:09.277 "state": "online", 00:13:09.277 "raid_level": "raid1", 00:13:09.277 "superblock": false, 00:13:09.277 "num_base_bdevs": 4, 00:13:09.277 "num_base_bdevs_discovered": 3, 00:13:09.277 "num_base_bdevs_operational": 3, 00:13:09.277 "process": { 00:13:09.277 "type": "rebuild", 00:13:09.277 "target": "spare", 00:13:09.277 "progress": { 00:13:09.277 "blocks": 32768, 00:13:09.277 "percent": 50 00:13:09.277 } 00:13:09.277 }, 00:13:09.277 "base_bdevs_list": [ 00:13:09.277 { 00:13:09.277 "name": "spare", 00:13:09.277 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:09.277 "is_configured": true, 00:13:09.277 "data_offset": 0, 00:13:09.277 "data_size": 65536 00:13:09.277 }, 00:13:09.277 { 00:13:09.277 "name": null, 00:13:09.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.277 "is_configured": false, 00:13:09.277 "data_offset": 0, 00:13:09.277 "data_size": 65536 00:13:09.277 }, 00:13:09.277 { 00:13:09.277 "name": "BaseBdev3", 00:13:09.277 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:09.277 "is_configured": true, 00:13:09.277 "data_offset": 0, 00:13:09.277 "data_size": 65536 00:13:09.277 }, 00:13:09.277 { 00:13:09.277 "name": "BaseBdev4", 00:13:09.278 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:09.278 "is_configured": true, 00:13:09.278 "data_offset": 0, 00:13:09.278 "data_size": 65536 00:13:09.278 } 00:13:09.278 ] 00:13:09.278 }' 00:13:09.278 22:57:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.278 [2024-11-26 22:57:48.367130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:09.278 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.537 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.538 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.538 22:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.797 [2024-11-26 22:57:48.683378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:10.365 119.50 IOPS, 358.50 MiB/s [2024-11-26T22:57:49.493Z] 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:10.365 22:57:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.624 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.624 "name": "raid_bdev1", 00:13:10.624 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:10.624 "strip_size_kb": 0, 00:13:10.624 "state": "online", 00:13:10.624 "raid_level": "raid1", 00:13:10.624 "superblock": false, 00:13:10.624 "num_base_bdevs": 4, 00:13:10.624 "num_base_bdevs_discovered": 3, 00:13:10.624 "num_base_bdevs_operational": 3, 00:13:10.624 "process": { 00:13:10.624 "type": "rebuild", 00:13:10.624 "target": "spare", 00:13:10.624 "progress": { 00:13:10.624 "blocks": 51200, 00:13:10.624 "percent": 78 00:13:10.624 } 00:13:10.624 }, 00:13:10.624 "base_bdevs_list": [ 00:13:10.624 { 00:13:10.624 "name": "spare", 00:13:10.624 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:10.624 "is_configured": true, 00:13:10.624 "data_offset": 0, 00:13:10.624 "data_size": 65536 00:13:10.624 }, 00:13:10.624 { 00:13:10.624 "name": null, 00:13:10.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.624 "is_configured": false, 00:13:10.624 "data_offset": 0, 00:13:10.624 "data_size": 65536 00:13:10.624 }, 00:13:10.624 { 00:13:10.624 "name": "BaseBdev3", 00:13:10.624 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:10.624 "is_configured": true, 00:13:10.624 "data_offset": 0, 00:13:10.624 "data_size": 65536 00:13:10.624 }, 00:13:10.624 { 00:13:10.624 "name": "BaseBdev4", 00:13:10.624 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:10.624 "is_configured": true, 00:13:10.624 "data_offset": 0, 00:13:10.624 "data_size": 65536 00:13:10.624 } 00:13:10.624 ] 00:13:10.624 }' 00:13:10.624 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.624 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.624 22:57:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.624 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.624 22:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.194 [2024-11-26 22:57:50.109958] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:11.194 [2024-11-26 22:57:50.209954] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:11.194 [2024-11-26 22:57:50.211451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.765 107.29 IOPS, 321.86 MiB/s [2024-11-26T22:57:50.893Z] 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.765 22:57:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.765 "name": "raid_bdev1", 00:13:11.765 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:11.765 "strip_size_kb": 0, 00:13:11.765 "state": "online", 00:13:11.765 "raid_level": "raid1", 00:13:11.765 "superblock": false, 00:13:11.765 "num_base_bdevs": 4, 00:13:11.765 "num_base_bdevs_discovered": 3, 00:13:11.765 "num_base_bdevs_operational": 3, 00:13:11.765 "base_bdevs_list": [ 00:13:11.765 { 00:13:11.765 "name": "spare", 00:13:11.765 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:11.765 "is_configured": true, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 }, 00:13:11.765 { 00:13:11.765 "name": null, 00:13:11.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.765 "is_configured": false, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 }, 00:13:11.765 { 00:13:11.765 "name": "BaseBdev3", 00:13:11.765 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:11.765 "is_configured": true, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 }, 00:13:11.765 { 00:13:11.765 "name": "BaseBdev4", 00:13:11.765 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:11.765 "is_configured": true, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 } 00:13:11.765 ] 00:13:11.765 }' 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.765 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.765 "name": "raid_bdev1", 00:13:11.765 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:11.765 "strip_size_kb": 0, 00:13:11.765 "state": "online", 00:13:11.765 "raid_level": "raid1", 00:13:11.765 "superblock": false, 00:13:11.765 "num_base_bdevs": 4, 00:13:11.765 "num_base_bdevs_discovered": 3, 00:13:11.765 "num_base_bdevs_operational": 3, 00:13:11.765 "base_bdevs_list": [ 00:13:11.765 { 00:13:11.765 "name": "spare", 00:13:11.765 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:11.765 "is_configured": true, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 }, 00:13:11.765 { 00:13:11.765 "name": null, 00:13:11.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.765 "is_configured": false, 00:13:11.765 "data_offset": 0, 00:13:11.765 "data_size": 65536 00:13:11.765 }, 00:13:11.765 { 
00:13:11.765 "name": "BaseBdev3", 00:13:11.765 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:11.765 "is_configured": true, 00:13:11.765 "data_offset": 0, 00:13:11.766 "data_size": 65536 00:13:11.766 }, 00:13:11.766 { 00:13:11.766 "name": "BaseBdev4", 00:13:11.766 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:11.766 "is_configured": true, 00:13:11.766 "data_offset": 0, 00:13:11.766 "data_size": 65536 00:13:11.766 } 00:13:11.766 ] 00:13:11.766 }' 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.766 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.026 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.026 22:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.026 "name": "raid_bdev1", 00:13:12.026 "uuid": "d8f4bf1d-a070-42e2-864a-84593cf1013b", 00:13:12.026 "strip_size_kb": 0, 00:13:12.026 "state": "online", 00:13:12.026 "raid_level": "raid1", 00:13:12.026 "superblock": false, 00:13:12.026 "num_base_bdevs": 4, 00:13:12.026 "num_base_bdevs_discovered": 3, 00:13:12.026 "num_base_bdevs_operational": 3, 00:13:12.026 "base_bdevs_list": [ 00:13:12.026 { 00:13:12.026 "name": "spare", 00:13:12.026 "uuid": "a7243155-66d4-54bd-80e6-09b3cf793d95", 00:13:12.026 "is_configured": true, 00:13:12.026 "data_offset": 0, 00:13:12.026 "data_size": 65536 00:13:12.026 }, 00:13:12.026 { 00:13:12.026 "name": null, 00:13:12.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.026 "is_configured": false, 00:13:12.026 "data_offset": 0, 00:13:12.026 "data_size": 65536 00:13:12.026 }, 00:13:12.026 { 00:13:12.026 "name": "BaseBdev3", 00:13:12.026 "uuid": "5efb0ffb-9c41-51ed-b203-ab7f0e735f1e", 00:13:12.026 "is_configured": true, 00:13:12.026 "data_offset": 0, 00:13:12.026 "data_size": 65536 00:13:12.026 }, 00:13:12.026 { 00:13:12.026 "name": "BaseBdev4", 00:13:12.026 "uuid": "b2df5308-3f41-554a-aef0-9402aab7e0ac", 00:13:12.026 "is_configured": true, 00:13:12.026 "data_offset": 0, 00:13:12.026 "data_size": 65536 00:13:12.026 } 00:13:12.026 ] 00:13:12.026 }' 00:13:12.026 22:57:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.026 22:57:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.287 98.12 IOPS, 294.38 MiB/s [2024-11-26T22:57:51.415Z] 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.287 [2024-11-26 22:57:51.352069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.287 [2024-11-26 22:57:51.352105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.287 00:13:12.287 Latency(us) 00:13:12.287 [2024-11-26T22:57:51.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.287 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:12.287 raid_bdev1 : 8.11 97.01 291.02 0.00 0.00 14516.79 280.25 113786.90 00:13:12.287 [2024-11-26T22:57:51.415Z] =================================================================================================================== 00:13:12.287 [2024-11-26T22:57:51.415Z] Total : 97.01 291.02 0.00 0.00 14516.79 280.25 113786.90 00:13:12.287 { 00:13:12.287 "results": [ 00:13:12.287 { 00:13:12.287 "job": "raid_bdev1", 00:13:12.287 "core_mask": "0x1", 00:13:12.287 "workload": "randrw", 00:13:12.287 "percentage": 50, 00:13:12.287 "status": "finished", 00:13:12.287 "queue_depth": 2, 00:13:12.287 "io_size": 3145728, 00:13:12.287 "runtime": 8.112907, 00:13:12.287 "iops": 97.00591908670961, 00:13:12.287 "mibps": 291.01775726012886, 00:13:12.287 "io_failed": 0, 00:13:12.287 "io_timeout": 0, 00:13:12.287 "avg_latency_us": 14516.788530964528, 00:13:12.287 "min_latency_us": 280.25451059008105, 00:13:12.287 "max_latency_us": 
113786.90142072692 00:13:12.287 } 00:13:12.287 ], 00:13:12.287 "core_count": 1 00:13:12.287 } 00:13:12.287 [2024-11-26 22:57:51.403091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.287 [2024-11-26 22:57:51.403142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.287 [2024-11-26 22:57:51.403230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.287 [2024-11-26 22:57:51.403240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.287 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.547 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:12.547 /dev/nbd0 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.808 1+0 records in 00:13:12.808 1+0 records out 00:13:12.808 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000413934 s, 9.9 MB/s 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:12.808 /dev/nbd1 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:12.808 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.067 1+0 records in 00:13:13.067 1+0 records out 00:13:13.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419186 s, 9.8 MB/s 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.067 22:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.068 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.327 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:13.327 /dev/nbd1 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.586 1+0 records in 00:13:13.586 1+0 records out 00:13:13.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340774 s, 12.0 MB/s 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.586 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.587 22:57:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.587 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.846 22:57:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 90998 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 90998 ']' 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 90998 00:13:13.846 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:14.106 22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.106 
22:57:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90998 00:13:14.106 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.106 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.106 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90998' 00:13:14.106 killing process with pid 90998 00:13:14.106 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 90998 00:13:14.106 Received shutdown signal, test time was about 9.720739 seconds 00:13:14.106 00:13:14.106 Latency(us) 00:13:14.106 [2024-11-26T22:57:53.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.106 [2024-11-26T22:57:53.234Z] =================================================================================================================== 00:13:14.106 [2024-11-26T22:57:53.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:14.106 [2024-11-26 22:57:53.008000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.106 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 90998 00:13:14.106 [2024-11-26 22:57:53.053404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.366 00:13:14.366 real 0m11.690s 00:13:14.366 user 0m15.143s 00:13:14.366 sys 0m1.875s 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.366 ************************************ 00:13:14.366 END TEST raid_rebuild_test_io 00:13:14.366 ************************************ 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.366 22:57:53 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:14.366 22:57:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.366 22:57:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.366 22:57:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.366 ************************************ 00:13:14.366 START TEST raid_rebuild_test_sb_io 00:13:14.366 ************************************ 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.366 22:57:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91385 00:13:14.366 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91385 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91385 ']' 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.367 22:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.367 Zero copy mechanism will not be used. 00:13:14.367 [2024-11-26 22:57:53.453920] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:14.367 [2024-11-26 22:57:53.454095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91385 ] 00:13:14.627 [2024-11-26 22:57:53.589787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:14.627 [2024-11-26 22:57:53.631159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.627 [2024-11-26 22:57:53.658325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.627 [2024-11-26 22:57:53.702346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.627 [2024-11-26 22:57:53.702455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.196 BaseBdev1_malloc 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.196 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.196 [2024-11-26 22:57:54.303883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.196 [2024-11-26 22:57:54.304023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.196 [2024-11-26 22:57:54.304064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:13:15.196 [2024-11-26 22:57:54.304096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.196 [2024-11-26 22:57:54.306118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.197 [2024-11-26 22:57:54.306192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.197 BaseBdev1 00:13:15.197 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.197 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.197 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.197 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.197 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 BaseBdev2_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-11-26 22:57:54.332713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.457 [2024-11-26 22:57:54.332768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.457 [2024-11-26 22:57:54.332786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.457 [2024-11-26 22:57:54.332795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.457 [2024-11-26 22:57:54.334762] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.457 [2024-11-26 22:57:54.334802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.457 BaseBdev2 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 BaseBdev3_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-11-26 22:57:54.361198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:15.457 [2024-11-26 22:57:54.361264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.457 [2024-11-26 22:57:54.361284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:15.457 [2024-11-26 22:57:54.361294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.457 [2024-11-26 22:57:54.363244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.457 BaseBdev3 00:13:15.457 [2024-11-26 22:57:54.363375] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 BaseBdev4_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-11-26 22:57:54.401278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:15.457 [2024-11-26 22:57:54.401337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.457 [2024-11-26 22:57:54.401356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:15.457 [2024-11-26 22:57:54.401369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.457 [2024-11-26 22:57:54.403487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.457 [2024-11-26 22:57:54.403589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:15.457 BaseBdev4 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 spare_malloc 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 spare_delay 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-11-26 22:57:54.441815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.457 [2024-11-26 22:57:54.441865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.457 [2024-11-26 22:57:54.441880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:15.457 [2024-11-26 22:57:54.441890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.457 [2024-11-26 22:57:54.443898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.457 [2024-11-26 22:57:54.443937] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.457 spare 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:15.457 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 [2024-11-26 22:57:54.453880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.458 [2024-11-26 22:57:54.455727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.458 [2024-11-26 22:57:54.455797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.458 [2024-11-26 22:57:54.455835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.458 [2024-11-26 22:57:54.455989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:15.458 [2024-11-26 22:57:54.456004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.458 [2024-11-26 22:57:54.456237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:15.458 [2024-11-26 22:57:54.456381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:15.458 [2024-11-26 22:57:54.456393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:15.458 [2024-11-26 22:57:54.456521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.458 "name": "raid_bdev1", 00:13:15.458 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:15.458 "strip_size_kb": 0, 00:13:15.458 "state": "online", 00:13:15.458 "raid_level": "raid1", 
00:13:15.458 "superblock": true, 00:13:15.458 "num_base_bdevs": 4, 00:13:15.458 "num_base_bdevs_discovered": 4, 00:13:15.458 "num_base_bdevs_operational": 4, 00:13:15.458 "base_bdevs_list": [ 00:13:15.458 { 00:13:15.458 "name": "BaseBdev1", 00:13:15.458 "uuid": "6e4c1d2c-ac5d-594d-be15-b34de00542f3", 00:13:15.458 "is_configured": true, 00:13:15.458 "data_offset": 2048, 00:13:15.458 "data_size": 63488 00:13:15.458 }, 00:13:15.458 { 00:13:15.458 "name": "BaseBdev2", 00:13:15.458 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:15.458 "is_configured": true, 00:13:15.458 "data_offset": 2048, 00:13:15.458 "data_size": 63488 00:13:15.458 }, 00:13:15.458 { 00:13:15.458 "name": "BaseBdev3", 00:13:15.458 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:15.458 "is_configured": true, 00:13:15.458 "data_offset": 2048, 00:13:15.458 "data_size": 63488 00:13:15.458 }, 00:13:15.458 { 00:13:15.458 "name": "BaseBdev4", 00:13:15.458 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:15.458 "is_configured": true, 00:13:15.458 "data_offset": 2048, 00:13:15.458 "data_size": 63488 00:13:15.458 } 00:13:15.458 ] 00:13:15.458 }' 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.458 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.717 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.717 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:15.717 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.717 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.718 [2024-11-26 22:57:54.842211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.978 [2024-11-26 22:57:54.937948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.978 22:57:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.978 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.978 "name": "raid_bdev1", 00:13:15.978 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:15.978 "strip_size_kb": 0, 00:13:15.978 "state": "online", 00:13:15.978 "raid_level": "raid1", 00:13:15.978 "superblock": true, 00:13:15.978 "num_base_bdevs": 4, 00:13:15.978 "num_base_bdevs_discovered": 3, 00:13:15.978 "num_base_bdevs_operational": 3, 00:13:15.978 "base_bdevs_list": [ 00:13:15.978 { 00:13:15.978 "name": null, 00:13:15.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.978 "is_configured": false, 00:13:15.978 "data_offset": 0, 00:13:15.978 "data_size": 
63488 00:13:15.979 }, 00:13:15.979 { 00:13:15.979 "name": "BaseBdev2", 00:13:15.979 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:15.979 "is_configured": true, 00:13:15.979 "data_offset": 2048, 00:13:15.979 "data_size": 63488 00:13:15.979 }, 00:13:15.979 { 00:13:15.979 "name": "BaseBdev3", 00:13:15.979 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:15.979 "is_configured": true, 00:13:15.979 "data_offset": 2048, 00:13:15.979 "data_size": 63488 00:13:15.979 }, 00:13:15.979 { 00:13:15.979 "name": "BaseBdev4", 00:13:15.979 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:15.979 "is_configured": true, 00:13:15.979 "data_offset": 2048, 00:13:15.979 "data_size": 63488 00:13:15.979 } 00:13:15.979 ] 00:13:15.979 }' 00:13:15.979 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.979 22:57:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.979 [2024-11-26 22:57:55.003955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:15.979 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.979 Zero copy mechanism will not be used. 00:13:15.979 Running I/O for 60 seconds... 
00:13:16.239 22:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.239 22:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.239 22:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.239 [2024-11-26 22:57:55.351749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.499 22:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.499 22:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.499 [2024-11-26 22:57:55.387525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:16.499 [2024-11-26 22:57:55.389565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.499 [2024-11-26 22:57:55.491143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:16.499 [2024-11-26 22:57:55.491725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:16.499 [2024-11-26 22:57:55.601841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:16.499 [2024-11-26 22:57:55.602163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.069 [2024-11-26 22:57:55.952747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.069 [2024-11-26 22:57:55.954056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.069 185.00 IOPS, 555.00 MiB/s [2024-11-26T22:57:56.197Z] [2024-11-26 22:57:56.185266] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.069 [2024-11-26 22:57:56.185943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.329 "name": "raid_bdev1", 00:13:17.329 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:17.329 "strip_size_kb": 0, 00:13:17.329 "state": "online", 00:13:17.329 "raid_level": "raid1", 00:13:17.329 "superblock": true, 00:13:17.329 "num_base_bdevs": 4, 00:13:17.329 "num_base_bdevs_discovered": 4, 00:13:17.329 "num_base_bdevs_operational": 4, 00:13:17.329 "process": { 00:13:17.329 "type": "rebuild", 00:13:17.329 "target": "spare", 00:13:17.329 "progress": { 
00:13:17.329 "blocks": 10240, 00:13:17.329 "percent": 16 00:13:17.329 } 00:13:17.329 }, 00:13:17.329 "base_bdevs_list": [ 00:13:17.329 { 00:13:17.329 "name": "spare", 00:13:17.329 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:17.329 "is_configured": true, 00:13:17.329 "data_offset": 2048, 00:13:17.329 "data_size": 63488 00:13:17.329 }, 00:13:17.329 { 00:13:17.329 "name": "BaseBdev2", 00:13:17.329 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:17.329 "is_configured": true, 00:13:17.329 "data_offset": 2048, 00:13:17.329 "data_size": 63488 00:13:17.329 }, 00:13:17.329 { 00:13:17.329 "name": "BaseBdev3", 00:13:17.329 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:17.329 "is_configured": true, 00:13:17.329 "data_offset": 2048, 00:13:17.329 "data_size": 63488 00:13:17.329 }, 00:13:17.329 { 00:13:17.329 "name": "BaseBdev4", 00:13:17.329 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:17.329 "is_configured": true, 00:13:17.329 "data_offset": 2048, 00:13:17.329 "data_size": 63488 00:13:17.329 } 00:13:17.329 ] 00:13:17.329 }' 00:13:17.329 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.589 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.589 [2024-11-26 22:57:56.556860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.589 [2024-11-26 
22:57:56.562142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:17.589 [2024-11-26 22:57:56.669148] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.589 [2024-11-26 22:57:56.678585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.589 [2024-11-26 22:57:56.678632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.589 [2024-11-26 22:57:56.678645] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.589 [2024-11-26 22:57:56.695847] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.849 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.850 "name": "raid_bdev1", 00:13:17.850 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:17.850 "strip_size_kb": 0, 00:13:17.850 "state": "online", 00:13:17.850 "raid_level": "raid1", 00:13:17.850 "superblock": true, 00:13:17.850 "num_base_bdevs": 4, 00:13:17.850 "num_base_bdevs_discovered": 3, 00:13:17.850 "num_base_bdevs_operational": 3, 00:13:17.850 "base_bdevs_list": [ 00:13:17.850 { 00:13:17.850 "name": null, 00:13:17.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.850 "is_configured": false, 00:13:17.850 "data_offset": 0, 00:13:17.850 "data_size": 63488 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "name": "BaseBdev2", 00:13:17.850 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "name": "BaseBdev3", 00:13:17.850 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "name": "BaseBdev4", 00:13:17.850 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 } 
00:13:17.850 ] 00:13:17.850 }' 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.850 22:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.110 154.00 IOPS, 462.00 MiB/s [2024-11-26T22:57:57.238Z] 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.110 "name": "raid_bdev1", 00:13:18.110 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:18.110 "strip_size_kb": 0, 00:13:18.110 "state": "online", 00:13:18.110 "raid_level": "raid1", 00:13:18.110 "superblock": true, 00:13:18.110 "num_base_bdevs": 4, 00:13:18.110 "num_base_bdevs_discovered": 3, 00:13:18.110 "num_base_bdevs_operational": 3, 00:13:18.110 "base_bdevs_list": [ 00:13:18.110 { 00:13:18.110 "name": null, 00:13:18.110 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:18.110 "is_configured": false, 00:13:18.110 "data_offset": 0, 00:13:18.110 "data_size": 63488 00:13:18.110 }, 00:13:18.110 { 00:13:18.110 "name": "BaseBdev2", 00:13:18.110 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:18.110 "is_configured": true, 00:13:18.110 "data_offset": 2048, 00:13:18.110 "data_size": 63488 00:13:18.110 }, 00:13:18.110 { 00:13:18.110 "name": "BaseBdev3", 00:13:18.110 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:18.110 "is_configured": true, 00:13:18.110 "data_offset": 2048, 00:13:18.110 "data_size": 63488 00:13:18.110 }, 00:13:18.110 { 00:13:18.110 "name": "BaseBdev4", 00:13:18.110 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:18.110 "is_configured": true, 00:13:18.110 "data_offset": 2048, 00:13:18.110 "data_size": 63488 00:13:18.110 } 00:13:18.110 ] 00:13:18.110 }' 00:13:18.110 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.370 [2024-11-26 22:57:57.326178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.370 22:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:13:18.370 [2024-11-26 22:57:57.367690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:13:18.370 [2024-11-26 22:57:57.369584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.370 [2024-11-26 22:57:57.484404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.370 [2024-11-26 22:57:57.484823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.631 [2024-11-26 22:57:57.601621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.631 [2024-11-26 22:57:57.601926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.907 [2024-11-26 22:57:57.932125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.187 160.33 IOPS, 481.00 MiB/s [2024-11-26T22:57:58.315Z] [2024-11-26 22:57:58.047304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.187 [2024-11-26 22:57:58.047967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.475 [2024-11-26 22:57:58.399912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.475 "name": "raid_bdev1", 00:13:19.475 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:19.475 "strip_size_kb": 0, 00:13:19.475 "state": "online", 00:13:19.475 "raid_level": "raid1", 00:13:19.475 "superblock": true, 00:13:19.475 "num_base_bdevs": 4, 00:13:19.475 "num_base_bdevs_discovered": 4, 00:13:19.475 "num_base_bdevs_operational": 4, 00:13:19.475 "process": { 00:13:19.475 "type": "rebuild", 00:13:19.475 "target": "spare", 00:13:19.475 "progress": { 00:13:19.475 "blocks": 12288, 00:13:19.475 "percent": 19 00:13:19.475 } 00:13:19.475 }, 00:13:19.475 "base_bdevs_list": [ 00:13:19.475 { 00:13:19.475 "name": "spare", 00:13:19.475 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:19.475 "is_configured": true, 00:13:19.475 "data_offset": 2048, 00:13:19.475 "data_size": 63488 00:13:19.475 }, 00:13:19.475 { 00:13:19.475 "name": "BaseBdev2", 00:13:19.475 "uuid": "a200632e-7fc6-547d-9481-517a12f89d28", 00:13:19.475 "is_configured": true, 00:13:19.475 "data_offset": 2048, 00:13:19.475 "data_size": 63488 00:13:19.475 }, 00:13:19.475 { 00:13:19.475 "name": "BaseBdev3", 00:13:19.475 "uuid": 
"5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:19.475 "is_configured": true, 00:13:19.475 "data_offset": 2048, 00:13:19.475 "data_size": 63488 00:13:19.475 }, 00:13:19.475 { 00:13:19.475 "name": "BaseBdev4", 00:13:19.475 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:19.475 "is_configured": true, 00:13:19.475 "data_offset": 2048, 00:13:19.475 "data_size": 63488 00:13:19.475 } 00:13:19.475 ] 00:13:19.475 }' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.475 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.475 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.475 [2024-11-26 22:57:58.527775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.753 
[2024-11-26 22:57:58.627688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.753 [2024-11-26 22:57:58.628395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.753 [2024-11-26 22:57:58.836540] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:13:19.753 [2024-11-26 22:57:58.836624] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.753 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.753 22:57:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.013 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.013 "name": "raid_bdev1", 00:13:20.013 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:20.013 "strip_size_kb": 0, 00:13:20.013 "state": "online", 00:13:20.013 "raid_level": "raid1", 00:13:20.013 "superblock": true, 00:13:20.013 "num_base_bdevs": 4, 00:13:20.013 "num_base_bdevs_discovered": 3, 00:13:20.013 "num_base_bdevs_operational": 3, 00:13:20.013 "process": { 00:13:20.013 "type": "rebuild", 00:13:20.013 "target": "spare", 00:13:20.013 "progress": { 00:13:20.013 "blocks": 16384, 00:13:20.013 "percent": 25 00:13:20.013 } 00:13:20.013 }, 00:13:20.013 "base_bdevs_list": [ 00:13:20.013 { 00:13:20.013 "name": "spare", 00:13:20.013 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:20.013 "is_configured": true, 00:13:20.013 "data_offset": 2048, 00:13:20.013 "data_size": 63488 00:13:20.013 }, 00:13:20.013 { 00:13:20.013 "name": null, 00:13:20.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.013 "is_configured": false, 00:13:20.014 "data_offset": 0, 00:13:20.014 "data_size": 63488 00:13:20.014 }, 00:13:20.014 { 00:13:20.014 "name": "BaseBdev3", 00:13:20.014 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:20.014 "is_configured": true, 00:13:20.014 "data_offset": 2048, 00:13:20.014 "data_size": 63488 00:13:20.014 }, 00:13:20.014 { 00:13:20.014 "name": "BaseBdev4", 00:13:20.014 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:20.014 "is_configured": true, 00:13:20.014 "data_offset": 2048, 00:13:20.014 "data_size": 63488 00:13:20.014 } 00:13:20.014 ] 00:13:20.014 }' 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.014 22:57:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.014 22:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.014 134.00 IOPS, 402.00 MiB/s [2024-11-26T22:57:59.142Z] 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.014 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.014 "name": "raid_bdev1", 00:13:20.014 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:20.014 "strip_size_kb": 0, 00:13:20.014 "state": "online", 00:13:20.014 "raid_level": "raid1", 00:13:20.014 "superblock": true, 00:13:20.014 "num_base_bdevs": 4, 
00:13:20.014 "num_base_bdevs_discovered": 3, 00:13:20.014 "num_base_bdevs_operational": 3, 00:13:20.014 "process": { 00:13:20.014 "type": "rebuild", 00:13:20.014 "target": "spare", 00:13:20.014 "progress": { 00:13:20.014 "blocks": 18432, 00:13:20.014 "percent": 29 00:13:20.014 } 00:13:20.014 }, 00:13:20.014 "base_bdevs_list": [ 00:13:20.014 { 00:13:20.014 "name": "spare", 00:13:20.014 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:20.014 "is_configured": true, 00:13:20.014 "data_offset": 2048, 00:13:20.014 "data_size": 63488 00:13:20.014 }, 00:13:20.014 { 00:13:20.014 "name": null, 00:13:20.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.014 "is_configured": false, 00:13:20.014 "data_offset": 0, 00:13:20.014 "data_size": 63488 00:13:20.014 }, 00:13:20.014 { 00:13:20.014 "name": "BaseBdev3", 00:13:20.014 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:20.014 "is_configured": true, 00:13:20.014 "data_offset": 2048, 00:13:20.014 "data_size": 63488 00:13:20.014 }, 00:13:20.014 { 00:13:20.014 "name": "BaseBdev4", 00:13:20.014 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:20.014 "is_configured": true, 00:13:20.014 "data_offset": 2048, 00:13:20.014 "data_size": 63488 00:13:20.014 } 00:13:20.014 ] 00:13:20.014 }' 00:13:20.014 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.014 [2024-11-26 22:57:59.066838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:20.014 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.014 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.272 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.272 22:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.272 
[2024-11-26 22:57:59.269536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.273 [2024-11-26 22:57:59.269780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.532 [2024-11-26 22:57:59.491507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:21.102 119.00 IOPS, 357.00 MiB/s [2024-11-26T22:58:00.230Z] 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.102 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.102 [2024-11-26 22:58:00.195289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:21.102 22:58:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.102 "name": "raid_bdev1", 00:13:21.102 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:21.102 "strip_size_kb": 0, 00:13:21.102 "state": "online", 00:13:21.102 "raid_level": "raid1", 00:13:21.102 "superblock": true, 00:13:21.102 "num_base_bdevs": 4, 00:13:21.102 "num_base_bdevs_discovered": 3, 00:13:21.102 "num_base_bdevs_operational": 3, 00:13:21.102 "process": { 00:13:21.102 "type": "rebuild", 00:13:21.102 "target": "spare", 00:13:21.102 "progress": { 00:13:21.102 "blocks": 36864, 00:13:21.102 "percent": 58 00:13:21.102 } 00:13:21.102 }, 00:13:21.102 "base_bdevs_list": [ 00:13:21.102 { 00:13:21.102 "name": "spare", 00:13:21.102 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:21.102 "is_configured": true, 00:13:21.102 "data_offset": 2048, 00:13:21.102 "data_size": 63488 00:13:21.102 }, 00:13:21.102 { 00:13:21.102 "name": null, 00:13:21.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.102 "is_configured": false, 00:13:21.102 "data_offset": 0, 00:13:21.102 "data_size": 63488 00:13:21.102 }, 00:13:21.102 { 00:13:21.102 "name": "BaseBdev3", 00:13:21.102 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:21.102 "is_configured": true, 00:13:21.102 "data_offset": 2048, 00:13:21.102 "data_size": 63488 00:13:21.102 }, 00:13:21.102 { 00:13:21.102 "name": "BaseBdev4", 00:13:21.103 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:21.103 "is_configured": true, 00:13:21.103 "data_offset": 2048, 00:13:21.103 "data_size": 63488 00:13:21.103 } 00:13:21.103 ] 00:13:21.103 }' 00:13:21.103 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.103 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.103 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.363 22:58:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.363 22:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.363 [2024-11-26 22:58:00.408479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:22.195 106.67 IOPS, 320.00 MiB/s [2024-11-26T22:58:01.323Z] 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.195 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.455 "name": "raid_bdev1", 00:13:22.455 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:22.455 "strip_size_kb": 0, 00:13:22.455 "state": "online", 00:13:22.455 "raid_level": "raid1", 00:13:22.455 "superblock": true, 
00:13:22.455 "num_base_bdevs": 4, 00:13:22.455 "num_base_bdevs_discovered": 3, 00:13:22.455 "num_base_bdevs_operational": 3, 00:13:22.455 "process": { 00:13:22.455 "type": "rebuild", 00:13:22.455 "target": "spare", 00:13:22.455 "progress": { 00:13:22.455 "blocks": 55296, 00:13:22.455 "percent": 87 00:13:22.455 } 00:13:22.455 }, 00:13:22.455 "base_bdevs_list": [ 00:13:22.455 { 00:13:22.455 "name": "spare", 00:13:22.455 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:22.455 "is_configured": true, 00:13:22.455 "data_offset": 2048, 00:13:22.455 "data_size": 63488 00:13:22.455 }, 00:13:22.455 { 00:13:22.455 "name": null, 00:13:22.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.455 "is_configured": false, 00:13:22.455 "data_offset": 0, 00:13:22.455 "data_size": 63488 00:13:22.455 }, 00:13:22.455 { 00:13:22.455 "name": "BaseBdev3", 00:13:22.455 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:22.455 "is_configured": true, 00:13:22.455 "data_offset": 2048, 00:13:22.455 "data_size": 63488 00:13:22.455 }, 00:13:22.455 { 00:13:22.455 "name": "BaseBdev4", 00:13:22.455 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:22.455 "is_configured": true, 00:13:22.455 "data_offset": 2048, 00:13:22.455 "data_size": 63488 00:13:22.455 } 00:13:22.455 ] 00:13:22.455 }' 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.455 [2024-11-26 22:58:01.389759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.455 22:58:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.715 [2024-11-26 22:58:01.820183] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:22.974 [2024-11-26 22:58:01.925095] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:22.974 [2024-11-26 22:58:01.928519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.544 95.57 IOPS, 286.71 MiB/s [2024-11-26T22:58:02.672Z] 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.545 "name": "raid_bdev1", 00:13:23.545 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:23.545 "strip_size_kb": 0, 00:13:23.545 "state": "online", 00:13:23.545 
"raid_level": "raid1", 00:13:23.545 "superblock": true, 00:13:23.545 "num_base_bdevs": 4, 00:13:23.545 "num_base_bdevs_discovered": 3, 00:13:23.545 "num_base_bdevs_operational": 3, 00:13:23.545 "base_bdevs_list": [ 00:13:23.545 { 00:13:23.545 "name": "spare", 00:13:23.545 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 "data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": null, 00:13:23.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.545 "is_configured": false, 00:13:23.545 "data_offset": 0, 00:13:23.545 "data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": "BaseBdev3", 00:13:23.545 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 "data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": "BaseBdev4", 00:13:23.545 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 "data_size": 63488 00:13:23.545 } 00:13:23.545 ] 00:13:23.545 }' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.545 22:58:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.545 "name": "raid_bdev1", 00:13:23.545 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:23.545 "strip_size_kb": 0, 00:13:23.545 "state": "online", 00:13:23.545 "raid_level": "raid1", 00:13:23.545 "superblock": true, 00:13:23.545 "num_base_bdevs": 4, 00:13:23.545 "num_base_bdevs_discovered": 3, 00:13:23.545 "num_base_bdevs_operational": 3, 00:13:23.545 "base_bdevs_list": [ 00:13:23.545 { 00:13:23.545 "name": "spare", 00:13:23.545 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 "data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": null, 00:13:23.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.545 "is_configured": false, 00:13:23.545 "data_offset": 0, 00:13:23.545 "data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": "BaseBdev3", 00:13:23.545 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 
"data_size": 63488 00:13:23.545 }, 00:13:23.545 { 00:13:23.545 "name": "BaseBdev4", 00:13:23.545 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:23.545 "is_configured": true, 00:13:23.545 "data_offset": 2048, 00:13:23.545 "data_size": 63488 00:13:23.545 } 00:13:23.545 ] 00:13:23.545 }' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.545 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.806 "name": "raid_bdev1", 00:13:23.806 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:23.806 "strip_size_kb": 0, 00:13:23.806 "state": "online", 00:13:23.806 "raid_level": "raid1", 00:13:23.806 "superblock": true, 00:13:23.806 "num_base_bdevs": 4, 00:13:23.806 "num_base_bdevs_discovered": 3, 00:13:23.806 "num_base_bdevs_operational": 3, 00:13:23.806 "base_bdevs_list": [ 00:13:23.806 { 00:13:23.806 "name": "spare", 00:13:23.806 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:23.806 "is_configured": true, 00:13:23.806 "data_offset": 2048, 00:13:23.806 "data_size": 63488 00:13:23.806 }, 00:13:23.806 { 00:13:23.806 "name": null, 00:13:23.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.806 "is_configured": false, 00:13:23.806 "data_offset": 0, 00:13:23.806 "data_size": 63488 00:13:23.806 }, 00:13:23.806 { 00:13:23.806 "name": "BaseBdev3", 00:13:23.806 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:23.806 "is_configured": true, 00:13:23.806 "data_offset": 2048, 00:13:23.806 "data_size": 63488 00:13:23.806 }, 00:13:23.806 { 00:13:23.806 "name": "BaseBdev4", 00:13:23.806 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:23.806 "is_configured": true, 00:13:23.806 "data_offset": 2048, 00:13:23.806 "data_size": 63488 00:13:23.806 } 00:13:23.806 ] 00:13:23.806 }' 00:13:23.806 22:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.806 22:58:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.066 88.12 IOPS, 264.38 MiB/s [2024-11-26T22:58:03.194Z] 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.066 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.066 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.066 [2024-11-26 22:58:03.160600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.066 [2024-11-26 22:58:03.160698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.326 00:13:24.326 Latency(us) 00:13:24.326 [2024-11-26T22:58:03.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.326 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:24.326 raid_bdev1 : 8.21 86.32 258.95 0.00 0.00 16816.62 282.04 116985.73 00:13:24.326 [2024-11-26T22:58:03.454Z] =================================================================================================================== 00:13:24.326 [2024-11-26T22:58:03.454Z] Total : 86.32 258.95 0.00 0.00 16816.62 282.04 116985.73 00:13:24.326 [2024-11-26 22:58:03.223570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.326 [2024-11-26 22:58:03.223661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.326 [2024-11-26 22:58:03.223768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.326 [2024-11-26 22:58:03.223831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:24.326 { 00:13:24.326 "results": [ 00:13:24.326 { 00:13:24.326 "job": "raid_bdev1", 00:13:24.326 "core_mask": "0x1", 00:13:24.326 "workload": "randrw", 
00:13:24.326 "percentage": 50, 00:13:24.326 "status": "finished", 00:13:24.326 "queue_depth": 2, 00:13:24.326 "io_size": 3145728, 00:13:24.326 "runtime": 8.213962, 00:13:24.326 "iops": 86.31644509677547, 00:13:24.326 "mibps": 258.9493352903264, 00:13:24.326 "io_failed": 0, 00:13:24.326 "io_timeout": 0, 00:13:24.326 "avg_latency_us": 16816.618872272265, 00:13:24.326 "min_latency_us": 282.03957116708796, 00:13:24.326 "max_latency_us": 116985.72997472326 00:13:24.326 } 00:13:24.326 ], 00:13:24.326 "core_count": 1 00:13:24.326 } 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:24.326 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.327 22:58:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.327 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:24.587 /dev/nbd0 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.587 1+0 records in 00:13:24.587 1+0 
records out 00:13:24.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432043 s, 9.5 MB/s 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.587 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:24.848 /dev/nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.848 1+0 records in 00:13:24.848 1+0 records out 00:13:24.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000591665 s, 6.9 MB/s 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.848 22:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.108 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev4 /dev/nbd1 00:13:25.368 /dev/nbd1 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.368 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.368 1+0 records in 00:13:25.368 1+0 records out 00:13:25.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051001 s, 8.0 MB/s 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.369 22:58:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.369 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.629 22:58:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.629 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:25.890 22:58:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.890 [2024-11-26 22:58:04.823676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.890 [2024-11-26 22:58:04.823741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.890 [2024-11-26 22:58:04.823762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:25.890 [2024-11-26 22:58:04.823772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.890 [2024-11-26 22:58:04.825858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.890 [2024-11-26 22:58:04.825905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.890 [2024-11-26 22:58:04.825990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:25.890 [2024-11-26 22:58:04.826028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.890 [2024-11-26 22:58:04.826147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.890 [2024-11-26 22:58:04.826242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.890 spare 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.890 [2024-11-26 22:58:04.926313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:25.890 [2024-11-26 22:58:04.926390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.890 [2024-11-26 22:58:04.926675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:13:25.890 [2024-11-26 22:58:04.926814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:25.890 [2024-11-26 22:58:04.926824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:25.890 [2024-11-26 22:58:04.926949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.890 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.890 "name": "raid_bdev1", 00:13:25.890 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:25.890 "strip_size_kb": 0, 00:13:25.890 "state": "online", 00:13:25.890 "raid_level": "raid1", 00:13:25.890 "superblock": true, 00:13:25.890 "num_base_bdevs": 4, 00:13:25.890 "num_base_bdevs_discovered": 3, 00:13:25.890 "num_base_bdevs_operational": 3, 00:13:25.890 "base_bdevs_list": [ 00:13:25.890 { 00:13:25.890 "name": "spare", 00:13:25.890 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:25.890 "is_configured": true, 00:13:25.890 "data_offset": 2048, 00:13:25.890 "data_size": 63488 00:13:25.890 }, 00:13:25.890 { 00:13:25.890 "name": null, 00:13:25.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.890 "is_configured": false, 00:13:25.890 "data_offset": 2048, 00:13:25.890 "data_size": 63488 00:13:25.890 }, 00:13:25.890 { 00:13:25.890 "name": "BaseBdev3", 00:13:25.890 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:25.890 "is_configured": true, 
00:13:25.890 "data_offset": 2048, 00:13:25.890 "data_size": 63488 00:13:25.890 }, 00:13:25.890 { 00:13:25.890 "name": "BaseBdev4", 00:13:25.890 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:25.890 "is_configured": true, 00:13:25.890 "data_offset": 2048, 00:13:25.891 "data_size": 63488 00:13:25.891 } 00:13:25.891 ] 00:13:25.891 }' 00:13:25.891 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.891 22:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.461 "name": "raid_bdev1", 00:13:26.461 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:26.461 "strip_size_kb": 0, 00:13:26.461 "state": "online", 00:13:26.461 "raid_level": "raid1", 00:13:26.461 
"superblock": true, 00:13:26.461 "num_base_bdevs": 4, 00:13:26.461 "num_base_bdevs_discovered": 3, 00:13:26.461 "num_base_bdevs_operational": 3, 00:13:26.461 "base_bdevs_list": [ 00:13:26.461 { 00:13:26.461 "name": "spare", 00:13:26.461 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:26.461 "is_configured": true, 00:13:26.461 "data_offset": 2048, 00:13:26.461 "data_size": 63488 00:13:26.461 }, 00:13:26.461 { 00:13:26.461 "name": null, 00:13:26.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.461 "is_configured": false, 00:13:26.461 "data_offset": 2048, 00:13:26.461 "data_size": 63488 00:13:26.461 }, 00:13:26.461 { 00:13:26.461 "name": "BaseBdev3", 00:13:26.461 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:26.461 "is_configured": true, 00:13:26.461 "data_offset": 2048, 00:13:26.461 "data_size": 63488 00:13:26.461 }, 00:13:26.461 { 00:13:26.461 "name": "BaseBdev4", 00:13:26.461 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:26.461 "is_configured": true, 00:13:26.461 "data_offset": 2048, 00:13:26.461 "data_size": 63488 00:13:26.461 } 00:13:26.461 ] 00:13:26.461 }' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.461 [2024-11-26 22:58:05.539951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.461 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.722 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.722 "name": "raid_bdev1", 00:13:26.722 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:26.722 "strip_size_kb": 0, 00:13:26.722 "state": "online", 00:13:26.722 "raid_level": "raid1", 00:13:26.722 "superblock": true, 00:13:26.722 "num_base_bdevs": 4, 00:13:26.722 "num_base_bdevs_discovered": 2, 00:13:26.722 "num_base_bdevs_operational": 2, 00:13:26.722 "base_bdevs_list": [ 00:13:26.722 { 00:13:26.722 "name": null, 00:13:26.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.722 "is_configured": false, 00:13:26.722 "data_offset": 0, 00:13:26.722 "data_size": 63488 00:13:26.722 }, 00:13:26.722 { 00:13:26.722 "name": null, 00:13:26.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.722 "is_configured": false, 00:13:26.722 "data_offset": 2048, 00:13:26.722 "data_size": 63488 00:13:26.722 }, 00:13:26.722 { 00:13:26.722 "name": "BaseBdev3", 00:13:26.722 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:26.722 "is_configured": true, 00:13:26.722 "data_offset": 2048, 00:13:26.722 "data_size": 63488 00:13:26.722 }, 00:13:26.722 { 00:13:26.722 "name": "BaseBdev4", 00:13:26.722 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:26.722 "is_configured": true, 00:13:26.722 "data_offset": 2048, 00:13:26.722 "data_size": 63488 00:13:26.722 } 00:13:26.722 ] 00:13:26.722 }' 00:13:26.722 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.722 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.982 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.982 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.982 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.982 [2024-11-26 22:58:05.984136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.982 [2024-11-26 22:58:05.984332] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:26.982 [2024-11-26 22:58:05.984415] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:26.982 [2024-11-26 22:58:05.984469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.982 [2024-11-26 22:58:05.988941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:13:26.982 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.982 22:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:26.982 [2024-11-26 22:58:05.990827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.921 
22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.921 22:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.921 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.921 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.921 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.182 "name": "raid_bdev1", 00:13:28.182 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:28.182 "strip_size_kb": 0, 00:13:28.182 "state": "online", 00:13:28.182 "raid_level": "raid1", 00:13:28.182 "superblock": true, 00:13:28.182 "num_base_bdevs": 4, 00:13:28.182 "num_base_bdevs_discovered": 3, 00:13:28.182 "num_base_bdevs_operational": 3, 00:13:28.182 "process": { 00:13:28.182 "type": "rebuild", 00:13:28.182 "target": "spare", 00:13:28.182 "progress": { 00:13:28.182 "blocks": 20480, 00:13:28.182 "percent": 32 00:13:28.182 } 00:13:28.182 }, 00:13:28.182 "base_bdevs_list": [ 00:13:28.182 { 00:13:28.182 "name": "spare", 00:13:28.182 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:28.182 "is_configured": true, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": null, 00:13:28.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.182 "is_configured": false, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": "BaseBdev3", 00:13:28.182 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:28.182 "is_configured": true, 00:13:28.182 "data_offset": 2048, 00:13:28.182 
"data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": "BaseBdev4", 00:13:28.182 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:28.182 "is_configured": true, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 } 00:13:28.182 ] 00:13:28.182 }' 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.182 [2024-11-26 22:58:07.153767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.182 [2024-11-26 22:58:07.196986] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.182 [2024-11-26 22:58:07.197049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.182 [2024-11-26 22:58:07.197064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.182 [2024-11-26 22:58:07.197073] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.182 22:58:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.182 "name": "raid_bdev1", 00:13:28.182 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:28.182 "strip_size_kb": 0, 00:13:28.182 "state": "online", 00:13:28.182 "raid_level": "raid1", 00:13:28.182 "superblock": true, 00:13:28.182 "num_base_bdevs": 4, 00:13:28.182 "num_base_bdevs_discovered": 2, 00:13:28.182 "num_base_bdevs_operational": 2, 
00:13:28.182 "base_bdevs_list": [ 00:13:28.182 { 00:13:28.182 "name": null, 00:13:28.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.182 "is_configured": false, 00:13:28.182 "data_offset": 0, 00:13:28.182 "data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": null, 00:13:28.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.182 "is_configured": false, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": "BaseBdev3", 00:13:28.182 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:28.182 "is_configured": true, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 }, 00:13:28.182 { 00:13:28.182 "name": "BaseBdev4", 00:13:28.182 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:28.182 "is_configured": true, 00:13:28.182 "data_offset": 2048, 00:13:28.182 "data_size": 63488 00:13:28.182 } 00:13:28.182 ] 00:13:28.182 }' 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.182 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.751 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.751 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.751 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.751 [2024-11-26 22:58:07.657338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.751 [2024-11-26 22:58:07.657440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.751 [2024-11-26 22:58:07.657479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:28.751 [2024-11-26 22:58:07.657509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:28.751 [2024-11-26 22:58:07.657936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.751 [2024-11-26 22:58:07.657997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.751 [2024-11-26 22:58:07.658102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:28.751 [2024-11-26 22:58:07.658144] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:28.751 [2024-11-26 22:58:07.658181] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:28.751 [2024-11-26 22:58:07.658238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.751 [2024-11-26 22:58:07.662091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:13:28.751 spare 00:13:28.751 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.751 22:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:28.751 [2024-11-26 22:58:07.663992] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.691 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.691 "name": "raid_bdev1", 00:13:29.691 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:29.691 "strip_size_kb": 0, 00:13:29.691 "state": "online", 00:13:29.691 "raid_level": "raid1", 00:13:29.691 "superblock": true, 00:13:29.691 "num_base_bdevs": 4, 00:13:29.691 "num_base_bdevs_discovered": 3, 00:13:29.691 "num_base_bdevs_operational": 3, 00:13:29.691 "process": { 00:13:29.691 "type": "rebuild", 00:13:29.691 "target": "spare", 00:13:29.691 "progress": { 00:13:29.691 "blocks": 20480, 00:13:29.691 "percent": 32 00:13:29.691 } 00:13:29.691 }, 00:13:29.691 "base_bdevs_list": [ 00:13:29.691 { 00:13:29.691 "name": "spare", 00:13:29.691 "uuid": "1e260377-0de9-5cd3-afa0-af1ab0126f4e", 00:13:29.691 "is_configured": true, 00:13:29.691 "data_offset": 2048, 00:13:29.692 "data_size": 63488 00:13:29.692 }, 00:13:29.692 { 00:13:29.692 "name": null, 00:13:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.692 "is_configured": false, 00:13:29.692 "data_offset": 2048, 00:13:29.692 "data_size": 63488 00:13:29.692 }, 00:13:29.692 { 00:13:29.692 "name": "BaseBdev3", 00:13:29.692 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:29.692 "is_configured": true, 00:13:29.692 "data_offset": 2048, 00:13:29.692 "data_size": 63488 00:13:29.692 }, 00:13:29.692 { 00:13:29.692 "name": "BaseBdev4", 00:13:29.692 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:29.692 "is_configured": true, 00:13:29.692 "data_offset": 2048, 
00:13:29.692 "data_size": 63488 00:13:29.692 } 00:13:29.692 ] 00:13:29.692 }' 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.692 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.692 [2024-11-26 22:58:08.805345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.951 [2024-11-26 22:58:08.870194] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.951 [2024-11-26 22:58:08.870247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.951 [2024-11-26 22:58:08.870275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.951 [2024-11-26 22:58:08.870281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.951 
22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.951 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.951 "name": "raid_bdev1", 00:13:29.951 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:29.951 "strip_size_kb": 0, 00:13:29.951 "state": "online", 00:13:29.951 "raid_level": "raid1", 00:13:29.951 "superblock": true, 00:13:29.951 "num_base_bdevs": 4, 00:13:29.951 "num_base_bdevs_discovered": 2, 00:13:29.951 "num_base_bdevs_operational": 2, 00:13:29.951 "base_bdevs_list": [ 00:13:29.952 { 00:13:29.952 "name": null, 00:13:29.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.952 "is_configured": false, 00:13:29.952 "data_offset": 0, 00:13:29.952 
"data_size": 63488 00:13:29.952 }, 00:13:29.952 { 00:13:29.952 "name": null, 00:13:29.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.952 "is_configured": false, 00:13:29.952 "data_offset": 2048, 00:13:29.952 "data_size": 63488 00:13:29.952 }, 00:13:29.952 { 00:13:29.952 "name": "BaseBdev3", 00:13:29.952 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:29.952 "is_configured": true, 00:13:29.952 "data_offset": 2048, 00:13:29.952 "data_size": 63488 00:13:29.952 }, 00:13:29.952 { 00:13:29.952 "name": "BaseBdev4", 00:13:29.952 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:29.952 "is_configured": true, 00:13:29.952 "data_offset": 2048, 00:13:29.952 "data_size": 63488 00:13:29.952 } 00:13:29.952 ] 00:13:29.952 }' 00:13:29.952 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.952 22:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.210 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.468 22:58:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.468 "name": "raid_bdev1", 00:13:30.468 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:30.468 "strip_size_kb": 0, 00:13:30.468 "state": "online", 00:13:30.468 "raid_level": "raid1", 00:13:30.468 "superblock": true, 00:13:30.468 "num_base_bdevs": 4, 00:13:30.468 "num_base_bdevs_discovered": 2, 00:13:30.468 "num_base_bdevs_operational": 2, 00:13:30.468 "base_bdevs_list": [ 00:13:30.468 { 00:13:30.468 "name": null, 00:13:30.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.468 "is_configured": false, 00:13:30.468 "data_offset": 0, 00:13:30.468 "data_size": 63488 00:13:30.468 }, 00:13:30.468 { 00:13:30.468 "name": null, 00:13:30.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.468 "is_configured": false, 00:13:30.468 "data_offset": 2048, 00:13:30.468 "data_size": 63488 00:13:30.468 }, 00:13:30.468 { 00:13:30.468 "name": "BaseBdev3", 00:13:30.468 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:30.468 "is_configured": true, 00:13:30.468 "data_offset": 2048, 00:13:30.468 "data_size": 63488 00:13:30.468 }, 00:13:30.468 { 00:13:30.468 "name": "BaseBdev4", 00:13:30.468 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:30.468 "is_configured": true, 00:13:30.468 "data_offset": 2048, 00:13:30.468 "data_size": 63488 00:13:30.468 } 00:13:30.468 ] 00:13:30.468 }' 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.468 22:58:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.468 [2024-11-26 22:58:09.482396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:30.468 [2024-11-26 22:58:09.482458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.468 [2024-11-26 22:58:09.482479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:30.468 [2024-11-26 22:58:09.482487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.468 [2024-11-26 22:58:09.482900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.468 [2024-11-26 22:58:09.482918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.468 [2024-11-26 22:58:09.482989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:30.468 [2024-11-26 22:58:09.483009] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:30.468 [2024-11-26 22:58:09.483021] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.468 [2024-11-26 
22:58:09.483031] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:30.468 BaseBdev1 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.468 22:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.407 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.667 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.667 "name": "raid_bdev1", 00:13:31.667 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:31.667 "strip_size_kb": 0, 00:13:31.667 "state": "online", 00:13:31.667 "raid_level": "raid1", 00:13:31.667 "superblock": true, 00:13:31.667 "num_base_bdevs": 4, 00:13:31.667 "num_base_bdevs_discovered": 2, 00:13:31.667 "num_base_bdevs_operational": 2, 00:13:31.667 "base_bdevs_list": [ 00:13:31.667 { 00:13:31.667 "name": null, 00:13:31.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.667 "is_configured": false, 00:13:31.667 "data_offset": 0, 00:13:31.667 "data_size": 63488 00:13:31.667 }, 00:13:31.667 { 00:13:31.667 "name": null, 00:13:31.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.667 "is_configured": false, 00:13:31.667 "data_offset": 2048, 00:13:31.667 "data_size": 63488 00:13:31.667 }, 00:13:31.667 { 00:13:31.667 "name": "BaseBdev3", 00:13:31.667 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:31.667 "is_configured": true, 00:13:31.667 "data_offset": 2048, 00:13:31.667 "data_size": 63488 00:13:31.667 }, 00:13:31.667 { 00:13:31.667 "name": "BaseBdev4", 00:13:31.667 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:31.667 "is_configured": true, 00:13:31.667 "data_offset": 2048, 00:13:31.667 "data_size": 63488 00:13:31.667 } 00:13:31.667 ] 00:13:31.667 }' 00:13:31.667 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.667 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.927 "name": "raid_bdev1", 00:13:31.927 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:31.927 "strip_size_kb": 0, 00:13:31.927 "state": "online", 00:13:31.927 "raid_level": "raid1", 00:13:31.927 "superblock": true, 00:13:31.927 "num_base_bdevs": 4, 00:13:31.927 "num_base_bdevs_discovered": 2, 00:13:31.927 "num_base_bdevs_operational": 2, 00:13:31.927 "base_bdevs_list": [ 00:13:31.927 { 00:13:31.927 "name": null, 00:13:31.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.927 "is_configured": false, 00:13:31.927 "data_offset": 0, 00:13:31.927 "data_size": 63488 00:13:31.927 }, 00:13:31.927 { 00:13:31.927 "name": null, 00:13:31.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.927 "is_configured": false, 00:13:31.927 "data_offset": 2048, 00:13:31.927 "data_size": 63488 00:13:31.927 }, 00:13:31.927 { 00:13:31.927 "name": "BaseBdev3", 00:13:31.927 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:31.927 "is_configured": true, 00:13:31.927 "data_offset": 2048, 00:13:31.927 "data_size": 63488 00:13:31.927 }, 00:13:31.927 { 00:13:31.927 
"name": "BaseBdev4", 00:13:31.927 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:31.927 "is_configured": true, 00:13:31.927 "data_offset": 2048, 00:13:31.927 "data_size": 63488 00:13:31.927 } 00:13:31.927 ] 00:13:31.927 }' 00:13:31.927 22:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.927 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.927 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.187 [2024-11-26 22:58:11.087007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.187 [2024-11-26 22:58:11.087179] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:32.187 [2024-11-26 22:58:11.087199] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:32.187 request: 00:13:32.187 { 00:13:32.187 "base_bdev": "BaseBdev1", 00:13:32.187 "raid_bdev": "raid_bdev1", 00:13:32.187 "method": "bdev_raid_add_base_bdev", 00:13:32.187 "req_id": 1 00:13:32.187 } 00:13:32.187 Got JSON-RPC error response 00:13:32.187 response: 00:13:32.187 { 00:13:32.187 "code": -22, 00:13:32.187 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:32.187 } 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.187 22:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.126 "name": "raid_bdev1", 00:13:33.126 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:33.126 "strip_size_kb": 0, 00:13:33.126 "state": "online", 00:13:33.126 "raid_level": "raid1", 00:13:33.126 "superblock": true, 00:13:33.126 "num_base_bdevs": 4, 00:13:33.126 "num_base_bdevs_discovered": 2, 00:13:33.126 "num_base_bdevs_operational": 2, 00:13:33.126 "base_bdevs_list": [ 00:13:33.126 { 00:13:33.126 "name": null, 00:13:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.126 "is_configured": false, 00:13:33.126 "data_offset": 0, 00:13:33.126 "data_size": 63488 00:13:33.126 }, 00:13:33.126 { 00:13:33.126 "name": null, 00:13:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.126 "is_configured": false, 
00:13:33.126 "data_offset": 2048, 00:13:33.126 "data_size": 63488 00:13:33.126 }, 00:13:33.126 { 00:13:33.126 "name": "BaseBdev3", 00:13:33.126 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:33.126 "is_configured": true, 00:13:33.126 "data_offset": 2048, 00:13:33.126 "data_size": 63488 00:13:33.126 }, 00:13:33.126 { 00:13:33.126 "name": "BaseBdev4", 00:13:33.126 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:33.126 "is_configured": true, 00:13:33.126 "data_offset": 2048, 00:13:33.126 "data_size": 63488 00:13:33.126 } 00:13:33.126 ] 00:13:33.126 }' 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.126 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:33.696 "name": "raid_bdev1", 00:13:33.696 "uuid": "d497f62e-5926-4302-ace6-75c053bcd4d3", 00:13:33.696 "strip_size_kb": 0, 00:13:33.696 "state": "online", 00:13:33.696 "raid_level": "raid1", 00:13:33.696 "superblock": true, 00:13:33.696 "num_base_bdevs": 4, 00:13:33.696 "num_base_bdevs_discovered": 2, 00:13:33.696 "num_base_bdevs_operational": 2, 00:13:33.696 "base_bdevs_list": [ 00:13:33.696 { 00:13:33.696 "name": null, 00:13:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.696 "is_configured": false, 00:13:33.696 "data_offset": 0, 00:13:33.696 "data_size": 63488 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": null, 00:13:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.696 "is_configured": false, 00:13:33.696 "data_offset": 2048, 00:13:33.696 "data_size": 63488 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": "BaseBdev3", 00:13:33.696 "uuid": "5580660e-ff30-5f73-b39d-7fcb64f10aa0", 00:13:33.696 "is_configured": true, 00:13:33.696 "data_offset": 2048, 00:13:33.696 "data_size": 63488 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": "BaseBdev4", 00:13:33.696 "uuid": "311b25f5-5e49-5eb1-a6d9-5ac414a16b74", 00:13:33.696 "is_configured": true, 00:13:33.696 "data_offset": 2048, 00:13:33.696 "data_size": 63488 00:13:33.696 } 00:13:33.696 ] 00:13:33.696 }' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91385 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
91385 ']' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91385 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.696 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91385 00:13:33.697 killing process with pid 91385 00:13:33.697 Received shutdown signal, test time was about 17.765615 seconds 00:13:33.697 00:13:33.697 Latency(us) 00:13:33.697 [2024-11-26T22:58:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.697 [2024-11-26T22:58:12.825Z] =================================================================================================================== 00:13:33.697 [2024-11-26T22:58:12.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.697 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.697 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.697 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91385' 00:13:33.697 22:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91385 00:13:33.697 [2024-11-26 22:58:12.772891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.697 [2024-11-26 22:58:12.772992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.697 [2024-11-26 22:58:12.773050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.697 [2024-11-26 22:58:12.773061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:33.697 22:58:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91385 00:13:33.697 [2024-11-26 22:58:12.819310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.957 ************************************ 00:13:33.957 END TEST raid_rebuild_test_sb_io 00:13:33.957 ************************************ 00:13:33.957 22:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.957 00:13:33.957 real 0m19.684s 00:13:33.957 user 0m26.143s 00:13:33.957 sys 0m2.600s 00:13:33.957 22:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.957 22:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 22:58:13 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:34.217 22:58:13 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:34.217 22:58:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:34.217 22:58:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.217 22:58:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 ************************************ 00:13:34.217 START TEST raid5f_state_function_test 00:13:34.217 ************************************ 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92093 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92093' 00:13:34.217 Process raid pid: 92093 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92093 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 92093 ']' 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.217 22:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 [2024-11-26 22:58:13.229452] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:13:34.217 [2024-11-26 22:58:13.229668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.477 [2024-11-26 22:58:13.372027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:34.477 [2024-11-26 22:58:13.408738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.477 [2024-11-26 22:58:13.434328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.477 [2024-11-26 22:58:13.477697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.477 [2024-11-26 22:58:13.477730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 [2024-11-26 22:58:14.038283] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.045 [2024-11-26 22:58:14.038335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.045 [2024-11-26 22:58:14.038356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.045 [2024-11-26 22:58:14.038363] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.045 [2024-11-26 22:58:14.038375] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.045 [2024-11-26 22:58:14.038382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.045 "name": "Existed_Raid", 00:13:35.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.045 "strip_size_kb": 64, 00:13:35.045 "state": "configuring", 00:13:35.045 "raid_level": "raid5f", 00:13:35.045 "superblock": false, 00:13:35.045 "num_base_bdevs": 3, 00:13:35.045 "num_base_bdevs_discovered": 0, 00:13:35.045 "num_base_bdevs_operational": 3, 00:13:35.045 "base_bdevs_list": [ 00:13:35.045 { 00:13:35.045 "name": "BaseBdev1", 00:13:35.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.045 "is_configured": false, 00:13:35.045 "data_offset": 0, 00:13:35.045 "data_size": 0 00:13:35.045 }, 00:13:35.045 { 00:13:35.045 "name": "BaseBdev2", 00:13:35.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.045 "is_configured": false, 00:13:35.045 "data_offset": 0, 00:13:35.045 "data_size": 0 00:13:35.045 }, 00:13:35.045 { 00:13:35.045 "name": "BaseBdev3", 00:13:35.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.045 "is_configured": false, 00:13:35.045 "data_offset": 0, 00:13:35.045 "data_size": 0 00:13:35.045 } 00:13:35.045 ] 00:13:35.045 }' 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.045 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.615 [2024-11-26 
22:58:14.494290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.615 [2024-11-26 22:58:14.494369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.615 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.615 [2024-11-26 22:58:14.502329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.615 [2024-11-26 22:58:14.502407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.615 [2024-11-26 22:58:14.502453] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.615 [2024-11-26 22:58:14.502473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.616 [2024-11-26 22:58:14.502491] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.616 [2024-11-26 22:58:14.502511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:13:35.616 [2024-11-26 22:58:14.519029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.616 BaseBdev1 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 [ 00:13:35.616 { 00:13:35.616 "name": "BaseBdev1", 00:13:35.616 "aliases": [ 00:13:35.616 "f62237fd-cc39-42a8-baaa-e7b54228e5d3" 00:13:35.616 ], 00:13:35.616 "product_name": "Malloc disk", 00:13:35.616 "block_size": 512, 
00:13:35.616 "num_blocks": 65536, 00:13:35.616 "uuid": "f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:35.616 "assigned_rate_limits": { 00:13:35.616 "rw_ios_per_sec": 0, 00:13:35.616 "rw_mbytes_per_sec": 0, 00:13:35.616 "r_mbytes_per_sec": 0, 00:13:35.616 "w_mbytes_per_sec": 0 00:13:35.616 }, 00:13:35.616 "claimed": true, 00:13:35.616 "claim_type": "exclusive_write", 00:13:35.616 "zoned": false, 00:13:35.616 "supported_io_types": { 00:13:35.616 "read": true, 00:13:35.616 "write": true, 00:13:35.616 "unmap": true, 00:13:35.616 "flush": true, 00:13:35.616 "reset": true, 00:13:35.616 "nvme_admin": false, 00:13:35.616 "nvme_io": false, 00:13:35.616 "nvme_io_md": false, 00:13:35.616 "write_zeroes": true, 00:13:35.616 "zcopy": true, 00:13:35.616 "get_zone_info": false, 00:13:35.616 "zone_management": false, 00:13:35.616 "zone_append": false, 00:13:35.616 "compare": false, 00:13:35.616 "compare_and_write": false, 00:13:35.616 "abort": true, 00:13:35.616 "seek_hole": false, 00:13:35.616 "seek_data": false, 00:13:35.616 "copy": true, 00:13:35.616 "nvme_iov_md": false 00:13:35.616 }, 00:13:35.616 "memory_domains": [ 00:13:35.616 { 00:13:35.616 "dma_device_id": "system", 00:13:35.616 "dma_device_type": 1 00:13:35.616 }, 00:13:35.616 { 00:13:35.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.616 "dma_device_type": 2 00:13:35.616 } 00:13:35.616 ], 00:13:35.616 "driver_specific": {} 00:13:35.616 } 00:13:35.616 ] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.616 "name": "Existed_Raid", 00:13:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.616 "strip_size_kb": 64, 00:13:35.616 "state": "configuring", 00:13:35.616 "raid_level": "raid5f", 00:13:35.616 "superblock": false, 00:13:35.616 "num_base_bdevs": 3, 00:13:35.616 "num_base_bdevs_discovered": 1, 00:13:35.616 "num_base_bdevs_operational": 3, 00:13:35.616 "base_bdevs_list": [ 00:13:35.616 { 00:13:35.616 "name": "BaseBdev1", 00:13:35.616 "uuid": 
"f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:35.616 "is_configured": true, 00:13:35.616 "data_offset": 0, 00:13:35.616 "data_size": 65536 00:13:35.616 }, 00:13:35.616 { 00:13:35.616 "name": "BaseBdev2", 00:13:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.616 "is_configured": false, 00:13:35.616 "data_offset": 0, 00:13:35.616 "data_size": 0 00:13:35.616 }, 00:13:35.616 { 00:13:35.616 "name": "BaseBdev3", 00:13:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.616 "is_configured": false, 00:13:35.616 "data_offset": 0, 00:13:35.616 "data_size": 0 00:13:35.616 } 00:13:35.616 ] 00:13:35.616 }' 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.616 22:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.187 [2024-11-26 22:58:15.047166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.187 [2024-11-26 22:58:15.047272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.187 [2024-11-26 
22:58:15.059212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.187 [2024-11-26 22:58:15.060981] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.187 [2024-11-26 22:58:15.061018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.187 [2024-11-26 22:58:15.061030] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:36.187 [2024-11-26 22:58:15.061037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.187 "name": "Existed_Raid", 00:13:36.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.187 "strip_size_kb": 64, 00:13:36.187 "state": "configuring", 00:13:36.187 "raid_level": "raid5f", 00:13:36.187 "superblock": false, 00:13:36.187 "num_base_bdevs": 3, 00:13:36.187 "num_base_bdevs_discovered": 1, 00:13:36.187 "num_base_bdevs_operational": 3, 00:13:36.187 "base_bdevs_list": [ 00:13:36.187 { 00:13:36.187 "name": "BaseBdev1", 00:13:36.187 "uuid": "f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:36.187 "is_configured": true, 00:13:36.187 "data_offset": 0, 00:13:36.187 "data_size": 65536 00:13:36.187 }, 00:13:36.187 { 00:13:36.187 "name": "BaseBdev2", 00:13:36.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.187 "is_configured": false, 00:13:36.187 "data_offset": 0, 00:13:36.187 "data_size": 0 00:13:36.187 }, 00:13:36.187 { 00:13:36.187 "name": "BaseBdev3", 00:13:36.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.187 "is_configured": false, 00:13:36.187 "data_offset": 0, 00:13:36.187 "data_size": 0 00:13:36.187 } 00:13:36.187 ] 00:13:36.187 }' 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.187 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.447 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 [2024-11-26 22:58:15.518385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.448 BaseBdev2 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 [ 00:13:36.448 { 00:13:36.448 "name": "BaseBdev2", 00:13:36.448 "aliases": [ 00:13:36.448 "77af1991-f849-484a-9197-a953b49a4cae" 00:13:36.448 ], 00:13:36.448 "product_name": "Malloc disk", 00:13:36.448 "block_size": 512, 00:13:36.448 "num_blocks": 65536, 00:13:36.448 "uuid": "77af1991-f849-484a-9197-a953b49a4cae", 00:13:36.448 "assigned_rate_limits": { 00:13:36.448 "rw_ios_per_sec": 0, 00:13:36.448 "rw_mbytes_per_sec": 0, 00:13:36.448 "r_mbytes_per_sec": 0, 00:13:36.448 "w_mbytes_per_sec": 0 00:13:36.448 }, 00:13:36.448 "claimed": true, 00:13:36.448 "claim_type": "exclusive_write", 00:13:36.448 "zoned": false, 00:13:36.448 "supported_io_types": { 00:13:36.448 "read": true, 00:13:36.448 "write": true, 00:13:36.448 "unmap": true, 00:13:36.448 "flush": true, 00:13:36.448 "reset": true, 00:13:36.448 "nvme_admin": false, 00:13:36.448 "nvme_io": false, 00:13:36.448 "nvme_io_md": false, 00:13:36.448 "write_zeroes": true, 00:13:36.448 "zcopy": true, 00:13:36.448 "get_zone_info": false, 00:13:36.448 "zone_management": false, 00:13:36.448 "zone_append": false, 00:13:36.448 "compare": false, 00:13:36.448 "compare_and_write": false, 00:13:36.448 "abort": true, 00:13:36.448 "seek_hole": false, 00:13:36.448 "seek_data": false, 00:13:36.448 "copy": true, 00:13:36.448 "nvme_iov_md": false 00:13:36.448 }, 00:13:36.448 "memory_domains": [ 00:13:36.448 { 00:13:36.448 "dma_device_id": "system", 00:13:36.448 "dma_device_type": 1 00:13:36.448 }, 00:13:36.448 { 00:13:36.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.448 "dma_device_type": 2 00:13:36.448 } 00:13:36.448 ], 00:13:36.448 "driver_specific": {} 00:13:36.448 } 00:13:36.448 ] 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.448 22:58:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.708 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.708 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.708 "name": "Existed_Raid", 00:13:36.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.708 "strip_size_kb": 64, 00:13:36.708 "state": "configuring", 00:13:36.708 "raid_level": "raid5f", 00:13:36.708 "superblock": false, 00:13:36.708 "num_base_bdevs": 3, 00:13:36.708 "num_base_bdevs_discovered": 2, 00:13:36.708 "num_base_bdevs_operational": 3, 00:13:36.708 "base_bdevs_list": [ 00:13:36.708 { 00:13:36.708 "name": "BaseBdev1", 00:13:36.708 "uuid": "f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:36.708 "is_configured": true, 00:13:36.708 "data_offset": 0, 00:13:36.708 "data_size": 65536 00:13:36.708 }, 00:13:36.708 { 00:13:36.708 "name": "BaseBdev2", 00:13:36.708 "uuid": "77af1991-f849-484a-9197-a953b49a4cae", 00:13:36.708 "is_configured": true, 00:13:36.708 "data_offset": 0, 00:13:36.708 "data_size": 65536 00:13:36.708 }, 00:13:36.708 { 00:13:36.708 "name": "BaseBdev3", 00:13:36.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.708 "is_configured": false, 00:13:36.708 "data_offset": 0, 00:13:36.708 "data_size": 0 00:13:36.708 } 00:13:36.708 ] 00:13:36.708 }' 00:13:36.708 22:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.708 22:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.969 [2024-11-26 22:58:16.067079] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.969 [2024-11-26 22:58:16.067238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:36.969 [2024-11-26 22:58:16.067306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:36.969 [2024-11-26 22:58:16.068239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:36.969 [2024-11-26 22:58:16.069746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:36.969 [2024-11-26 22:58:16.069812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:13:36.969 [2024-11-26 22:58:16.070402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.969 BaseBdev3 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.969 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.229 [ 00:13:37.229 { 00:13:37.229 "name": "BaseBdev3", 00:13:37.229 "aliases": [ 00:13:37.229 "cc336cb4-0ec7-4b5b-8249-85b4b4da2445" 00:13:37.229 ], 00:13:37.229 "product_name": "Malloc disk", 00:13:37.229 "block_size": 512, 00:13:37.229 "num_blocks": 65536, 00:13:37.229 "uuid": "cc336cb4-0ec7-4b5b-8249-85b4b4da2445", 00:13:37.229 "assigned_rate_limits": { 00:13:37.229 "rw_ios_per_sec": 0, 00:13:37.229 "rw_mbytes_per_sec": 0, 00:13:37.229 "r_mbytes_per_sec": 0, 00:13:37.229 "w_mbytes_per_sec": 0 00:13:37.229 }, 00:13:37.229 "claimed": true, 00:13:37.229 "claim_type": "exclusive_write", 00:13:37.229 "zoned": false, 00:13:37.229 "supported_io_types": { 00:13:37.229 "read": true, 00:13:37.230 "write": true, 00:13:37.230 "unmap": true, 00:13:37.230 "flush": true, 00:13:37.230 "reset": true, 00:13:37.230 "nvme_admin": false, 00:13:37.230 "nvme_io": false, 00:13:37.230 "nvme_io_md": false, 00:13:37.230 "write_zeroes": true, 00:13:37.230 "zcopy": true, 00:13:37.230 "get_zone_info": false, 00:13:37.230 "zone_management": false, 00:13:37.230 "zone_append": false, 00:13:37.230 "compare": false, 00:13:37.230 "compare_and_write": false, 00:13:37.230 "abort": true, 00:13:37.230 "seek_hole": false, 00:13:37.230 "seek_data": false, 00:13:37.230 "copy": true, 00:13:37.230 "nvme_iov_md": false 00:13:37.230 }, 00:13:37.230 "memory_domains": [ 00:13:37.230 { 00:13:37.230 "dma_device_id": "system", 00:13:37.230 "dma_device_type": 1 00:13:37.230 }, 00:13:37.230 { 00:13:37.230 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.230 "dma_device_type": 2 00:13:37.230 } 00:13:37.230 ], 00:13:37.230 "driver_specific": {} 00:13:37.230 } 00:13:37.230 ] 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.230 "name": "Existed_Raid", 00:13:37.230 "uuid": "84cfd1f4-0ccb-4fd6-89bf-4e1c79cc3583", 00:13:37.230 "strip_size_kb": 64, 00:13:37.230 "state": "online", 00:13:37.230 "raid_level": "raid5f", 00:13:37.230 "superblock": false, 00:13:37.230 "num_base_bdevs": 3, 00:13:37.230 "num_base_bdevs_discovered": 3, 00:13:37.230 "num_base_bdevs_operational": 3, 00:13:37.230 "base_bdevs_list": [ 00:13:37.230 { 00:13:37.230 "name": "BaseBdev1", 00:13:37.230 "uuid": "f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:37.230 "is_configured": true, 00:13:37.230 "data_offset": 0, 00:13:37.230 "data_size": 65536 00:13:37.230 }, 00:13:37.230 { 00:13:37.230 "name": "BaseBdev2", 00:13:37.230 "uuid": "77af1991-f849-484a-9197-a953b49a4cae", 00:13:37.230 "is_configured": true, 00:13:37.230 "data_offset": 0, 00:13:37.230 "data_size": 65536 00:13:37.230 }, 00:13:37.230 { 00:13:37.230 "name": "BaseBdev3", 00:13:37.230 "uuid": "cc336cb4-0ec7-4b5b-8249-85b4b4da2445", 00:13:37.230 "is_configured": true, 00:13:37.230 "data_offset": 0, 00:13:37.230 "data_size": 65536 00:13:37.230 } 00:13:37.230 ] 00:13:37.230 }' 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.230 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.490 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.490 [2024-11-26 22:58:16.597263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.751 "name": "Existed_Raid", 00:13:37.751 "aliases": [ 00:13:37.751 "84cfd1f4-0ccb-4fd6-89bf-4e1c79cc3583" 00:13:37.751 ], 00:13:37.751 "product_name": "Raid Volume", 00:13:37.751 "block_size": 512, 00:13:37.751 "num_blocks": 131072, 00:13:37.751 "uuid": "84cfd1f4-0ccb-4fd6-89bf-4e1c79cc3583", 00:13:37.751 "assigned_rate_limits": { 00:13:37.751 "rw_ios_per_sec": 0, 00:13:37.751 "rw_mbytes_per_sec": 0, 00:13:37.751 "r_mbytes_per_sec": 0, 00:13:37.751 "w_mbytes_per_sec": 0 00:13:37.751 }, 00:13:37.751 "claimed": false, 00:13:37.751 "zoned": false, 00:13:37.751 "supported_io_types": { 00:13:37.751 "read": true, 00:13:37.751 "write": true, 00:13:37.751 "unmap": false, 00:13:37.751 "flush": false, 00:13:37.751 "reset": true, 
00:13:37.751 "nvme_admin": false, 00:13:37.751 "nvme_io": false, 00:13:37.751 "nvme_io_md": false, 00:13:37.751 "write_zeroes": true, 00:13:37.751 "zcopy": false, 00:13:37.751 "get_zone_info": false, 00:13:37.751 "zone_management": false, 00:13:37.751 "zone_append": false, 00:13:37.751 "compare": false, 00:13:37.751 "compare_and_write": false, 00:13:37.751 "abort": false, 00:13:37.751 "seek_hole": false, 00:13:37.751 "seek_data": false, 00:13:37.751 "copy": false, 00:13:37.751 "nvme_iov_md": false 00:13:37.751 }, 00:13:37.751 "driver_specific": { 00:13:37.751 "raid": { 00:13:37.751 "uuid": "84cfd1f4-0ccb-4fd6-89bf-4e1c79cc3583", 00:13:37.751 "strip_size_kb": 64, 00:13:37.751 "state": "online", 00:13:37.751 "raid_level": "raid5f", 00:13:37.751 "superblock": false, 00:13:37.751 "num_base_bdevs": 3, 00:13:37.751 "num_base_bdevs_discovered": 3, 00:13:37.751 "num_base_bdevs_operational": 3, 00:13:37.751 "base_bdevs_list": [ 00:13:37.751 { 00:13:37.751 "name": "BaseBdev1", 00:13:37.751 "uuid": "f62237fd-cc39-42a8-baaa-e7b54228e5d3", 00:13:37.751 "is_configured": true, 00:13:37.751 "data_offset": 0, 00:13:37.751 "data_size": 65536 00:13:37.751 }, 00:13:37.751 { 00:13:37.751 "name": "BaseBdev2", 00:13:37.751 "uuid": "77af1991-f849-484a-9197-a953b49a4cae", 00:13:37.751 "is_configured": true, 00:13:37.751 "data_offset": 0, 00:13:37.751 "data_size": 65536 00:13:37.751 }, 00:13:37.751 { 00:13:37.751 "name": "BaseBdev3", 00:13:37.751 "uuid": "cc336cb4-0ec7-4b5b-8249-85b4b4da2445", 00:13:37.751 "is_configured": true, 00:13:37.751 "data_offset": 0, 00:13:37.751 "data_size": 65536 00:13:37.751 } 00:13:37.751 ] 00:13:37.751 } 00:13:37.751 } 00:13:37.751 }' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:37.751 BaseBdev2 00:13:37.751 
BaseBdev3' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.751 22:58:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.751 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.012 [2024-11-26 22:58:16.889233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.012 "name": "Existed_Raid", 00:13:38.012 "uuid": "84cfd1f4-0ccb-4fd6-89bf-4e1c79cc3583", 00:13:38.012 "strip_size_kb": 64, 00:13:38.012 "state": "online", 00:13:38.012 "raid_level": "raid5f", 00:13:38.012 "superblock": false, 00:13:38.012 "num_base_bdevs": 3, 00:13:38.012 "num_base_bdevs_discovered": 2, 00:13:38.012 "num_base_bdevs_operational": 2, 00:13:38.012 "base_bdevs_list": [ 00:13:38.012 { 00:13:38.012 "name": null, 00:13:38.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.012 "is_configured": false, 00:13:38.012 "data_offset": 0, 00:13:38.012 "data_size": 65536 00:13:38.012 }, 00:13:38.012 { 00:13:38.012 "name": "BaseBdev2", 00:13:38.012 "uuid": "77af1991-f849-484a-9197-a953b49a4cae", 00:13:38.012 "is_configured": true, 00:13:38.012 "data_offset": 0, 00:13:38.012 "data_size": 65536 00:13:38.012 }, 00:13:38.012 { 00:13:38.012 "name": "BaseBdev3", 00:13:38.012 "uuid": "cc336cb4-0ec7-4b5b-8249-85b4b4da2445", 00:13:38.012 "is_configured": true, 00:13:38.012 "data_offset": 0, 00:13:38.012 "data_size": 65536 00:13:38.012 } 00:13:38.012 ] 00:13:38.012 }' 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.012 22:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.273 22:58:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.273 [2024-11-26 22:58:17.380547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.273 [2024-11-26 22:58:17.380686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.273 [2024-11-26 22:58:17.391807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.273 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.534 [2024-11-26 22:58:17.447856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:38.534 [2024-11-26 22:58:17.447957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.534 BaseBdev2 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:38.534 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 [ 00:13:38.535 { 00:13:38.535 "name": "BaseBdev2", 00:13:38.535 "aliases": [ 00:13:38.535 "a9d78a49-7866-4794-872a-514f32c2a21a" 00:13:38.535 ], 00:13:38.535 "product_name": "Malloc disk", 00:13:38.535 "block_size": 512, 00:13:38.535 "num_blocks": 65536, 00:13:38.535 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:38.535 "assigned_rate_limits": { 00:13:38.535 "rw_ios_per_sec": 0, 00:13:38.535 "rw_mbytes_per_sec": 0, 00:13:38.535 "r_mbytes_per_sec": 0, 00:13:38.535 "w_mbytes_per_sec": 0 00:13:38.535 }, 00:13:38.535 "claimed": false, 00:13:38.535 "zoned": false, 00:13:38.535 "supported_io_types": { 00:13:38.535 "read": true, 00:13:38.535 "write": true, 00:13:38.535 "unmap": true, 00:13:38.535 "flush": true, 00:13:38.535 "reset": true, 00:13:38.535 "nvme_admin": false, 00:13:38.535 "nvme_io": false, 00:13:38.535 "nvme_io_md": false, 00:13:38.535 "write_zeroes": true, 00:13:38.535 "zcopy": true, 00:13:38.535 "get_zone_info": false, 00:13:38.535 "zone_management": false, 00:13:38.535 "zone_append": false, 00:13:38.535 "compare": false, 00:13:38.535 "compare_and_write": false, 00:13:38.535 "abort": true, 00:13:38.535 "seek_hole": false, 00:13:38.535 "seek_data": false, 00:13:38.535 "copy": true, 00:13:38.535 "nvme_iov_md": false 00:13:38.535 }, 00:13:38.535 "memory_domains": [ 00:13:38.535 { 00:13:38.535 "dma_device_id": "system", 00:13:38.535 "dma_device_type": 1 00:13:38.535 }, 00:13:38.535 { 00:13:38.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.535 "dma_device_type": 2 00:13:38.535 } 00:13:38.535 ], 00:13:38.535 "driver_specific": {} 00:13:38.535 } 00:13:38.535 ] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 BaseBdev3 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 [ 00:13:38.535 { 00:13:38.535 "name": "BaseBdev3", 00:13:38.535 "aliases": [ 00:13:38.535 "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1" 00:13:38.535 ], 00:13:38.535 "product_name": "Malloc disk", 00:13:38.535 "block_size": 512, 00:13:38.535 "num_blocks": 65536, 00:13:38.535 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:38.535 "assigned_rate_limits": { 00:13:38.535 "rw_ios_per_sec": 0, 00:13:38.535 "rw_mbytes_per_sec": 0, 00:13:38.535 "r_mbytes_per_sec": 0, 00:13:38.535 "w_mbytes_per_sec": 0 00:13:38.535 }, 00:13:38.535 "claimed": false, 00:13:38.535 "zoned": false, 00:13:38.535 "supported_io_types": { 00:13:38.535 "read": true, 00:13:38.535 "write": true, 00:13:38.535 "unmap": true, 00:13:38.535 "flush": true, 00:13:38.535 "reset": true, 00:13:38.535 "nvme_admin": false, 00:13:38.535 "nvme_io": false, 00:13:38.535 "nvme_io_md": false, 00:13:38.535 "write_zeroes": true, 00:13:38.535 "zcopy": true, 00:13:38.535 "get_zone_info": false, 00:13:38.535 "zone_management": false, 00:13:38.535 "zone_append": false, 00:13:38.535 "compare": false, 00:13:38.535 "compare_and_write": false, 00:13:38.535 "abort": true, 00:13:38.535 "seek_hole": false, 00:13:38.535 "seek_data": false, 00:13:38.535 "copy": true, 00:13:38.535 "nvme_iov_md": false 00:13:38.535 }, 00:13:38.535 "memory_domains": [ 00:13:38.535 { 00:13:38.535 "dma_device_id": "system", 00:13:38.535 "dma_device_type": 1 00:13:38.535 }, 00:13:38.535 { 00:13:38.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.535 "dma_device_type": 2 00:13:38.535 } 00:13:38.535 ], 00:13:38.535 "driver_specific": {} 00:13:38.535 } 00:13:38.535 ] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.535 [2024-11-26 22:58:17.621714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.535 [2024-11-26 22:58:17.621835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.535 [2024-11-26 22:58:17.621872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.535 [2024-11-26 22:58:17.623595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.535 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.536 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.795 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.795 "name": "Existed_Raid", 00:13:38.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.795 "strip_size_kb": 64, 00:13:38.795 "state": "configuring", 00:13:38.795 "raid_level": "raid5f", 00:13:38.795 "superblock": false, 00:13:38.795 "num_base_bdevs": 3, 00:13:38.795 "num_base_bdevs_discovered": 2, 00:13:38.795 "num_base_bdevs_operational": 3, 00:13:38.796 "base_bdevs_list": [ 00:13:38.796 { 00:13:38.796 "name": "BaseBdev1", 00:13:38.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.796 "is_configured": false, 00:13:38.796 "data_offset": 0, 00:13:38.796 "data_size": 0 00:13:38.796 }, 00:13:38.796 { 00:13:38.796 "name": "BaseBdev2", 00:13:38.796 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 
00:13:38.796 "is_configured": true, 00:13:38.796 "data_offset": 0, 00:13:38.796 "data_size": 65536 00:13:38.796 }, 00:13:38.796 { 00:13:38.796 "name": "BaseBdev3", 00:13:38.796 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:38.796 "is_configured": true, 00:13:38.796 "data_offset": 0, 00:13:38.796 "data_size": 65536 00:13:38.796 } 00:13:38.796 ] 00:13:38.796 }' 00:13:38.796 22:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.796 22:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.066 [2024-11-26 22:58:18.037789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.066 "name": "Existed_Raid", 00:13:39.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.066 "strip_size_kb": 64, 00:13:39.066 "state": "configuring", 00:13:39.066 "raid_level": "raid5f", 00:13:39.066 "superblock": false, 00:13:39.066 "num_base_bdevs": 3, 00:13:39.066 "num_base_bdevs_discovered": 1, 00:13:39.066 "num_base_bdevs_operational": 3, 00:13:39.066 "base_bdevs_list": [ 00:13:39.066 { 00:13:39.066 "name": "BaseBdev1", 00:13:39.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.066 "is_configured": false, 00:13:39.066 "data_offset": 0, 00:13:39.066 "data_size": 0 00:13:39.066 }, 00:13:39.066 { 00:13:39.066 "name": null, 00:13:39.066 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:39.066 "is_configured": false, 00:13:39.066 "data_offset": 0, 00:13:39.066 "data_size": 65536 00:13:39.066 }, 00:13:39.066 { 00:13:39.066 "name": "BaseBdev3", 00:13:39.066 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:39.066 "is_configured": 
true, 00:13:39.066 "data_offset": 0, 00:13:39.066 "data_size": 65536 00:13:39.066 } 00:13:39.066 ] 00:13:39.066 }' 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.066 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.347 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.347 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.347 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.347 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.347 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.623 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.624 [2024-11-26 22:58:18.500920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.624 BaseBdev1 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.624 22:58:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.624 [ 00:13:39.624 { 00:13:39.624 "name": "BaseBdev1", 00:13:39.624 "aliases": [ 00:13:39.624 "b3700506-47dd-429f-98fd-c2c5c6cb0469" 00:13:39.624 ], 00:13:39.624 "product_name": "Malloc disk", 00:13:39.624 "block_size": 512, 00:13:39.624 "num_blocks": 65536, 00:13:39.624 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:39.624 "assigned_rate_limits": { 00:13:39.624 "rw_ios_per_sec": 0, 00:13:39.624 "rw_mbytes_per_sec": 0, 00:13:39.624 "r_mbytes_per_sec": 0, 00:13:39.624 "w_mbytes_per_sec": 0 00:13:39.624 }, 00:13:39.624 "claimed": true, 00:13:39.624 "claim_type": "exclusive_write", 00:13:39.624 "zoned": false, 00:13:39.624 "supported_io_types": { 00:13:39.624 "read": true, 00:13:39.624 "write": true, 00:13:39.624 "unmap": true, 00:13:39.624 "flush": true, 00:13:39.624 "reset": true, 00:13:39.624 "nvme_admin": false, 00:13:39.624 "nvme_io": false, 00:13:39.624 
"nvme_io_md": false, 00:13:39.624 "write_zeroes": true, 00:13:39.624 "zcopy": true, 00:13:39.624 "get_zone_info": false, 00:13:39.624 "zone_management": false, 00:13:39.624 "zone_append": false, 00:13:39.624 "compare": false, 00:13:39.624 "compare_and_write": false, 00:13:39.624 "abort": true, 00:13:39.624 "seek_hole": false, 00:13:39.624 "seek_data": false, 00:13:39.624 "copy": true, 00:13:39.624 "nvme_iov_md": false 00:13:39.624 }, 00:13:39.624 "memory_domains": [ 00:13:39.624 { 00:13:39.624 "dma_device_id": "system", 00:13:39.624 "dma_device_type": 1 00:13:39.624 }, 00:13:39.624 { 00:13:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.624 "dma_device_type": 2 00:13:39.624 } 00:13:39.624 ], 00:13:39.624 "driver_specific": {} 00:13:39.624 } 00:13:39.624 ] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.624 22:58:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.624 "name": "Existed_Raid", 00:13:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.624 "strip_size_kb": 64, 00:13:39.624 "state": "configuring", 00:13:39.624 "raid_level": "raid5f", 00:13:39.624 "superblock": false, 00:13:39.624 "num_base_bdevs": 3, 00:13:39.624 "num_base_bdevs_discovered": 2, 00:13:39.624 "num_base_bdevs_operational": 3, 00:13:39.624 "base_bdevs_list": [ 00:13:39.624 { 00:13:39.624 "name": "BaseBdev1", 00:13:39.624 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:39.624 "is_configured": true, 00:13:39.624 "data_offset": 0, 00:13:39.624 "data_size": 65536 00:13:39.624 }, 00:13:39.624 { 00:13:39.624 "name": null, 00:13:39.624 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:39.624 "is_configured": false, 00:13:39.624 "data_offset": 0, 00:13:39.624 "data_size": 65536 00:13:39.624 }, 00:13:39.624 { 00:13:39.624 "name": "BaseBdev3", 00:13:39.624 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:39.624 "is_configured": true, 00:13:39.624 "data_offset": 0, 00:13:39.624 "data_size": 65536 00:13:39.624 } 00:13:39.624 ] 00:13:39.624 }' 00:13:39.624 
22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.624 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.884 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.884 22:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.884 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.884 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.884 22:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.884 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:39.884 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:39.884 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.884 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.884 [2024-11-26 22:58:19.009092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.144 22:58:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.144 "name": "Existed_Raid", 00:13:40.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.144 "strip_size_kb": 64, 00:13:40.144 "state": "configuring", 00:13:40.144 "raid_level": "raid5f", 00:13:40.144 "superblock": false, 00:13:40.144 "num_base_bdevs": 3, 00:13:40.144 "num_base_bdevs_discovered": 1, 00:13:40.144 "num_base_bdevs_operational": 3, 00:13:40.144 "base_bdevs_list": [ 00:13:40.144 { 00:13:40.144 "name": "BaseBdev1", 00:13:40.144 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:40.144 "is_configured": true, 00:13:40.144 "data_offset": 0, 00:13:40.144 "data_size": 65536 00:13:40.144 }, 00:13:40.144 { 00:13:40.144 "name": 
null, 00:13:40.144 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:40.144 "is_configured": false, 00:13:40.144 "data_offset": 0, 00:13:40.144 "data_size": 65536 00:13:40.144 }, 00:13:40.144 { 00:13:40.144 "name": null, 00:13:40.144 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:40.144 "is_configured": false, 00:13:40.144 "data_offset": 0, 00:13:40.144 "data_size": 65536 00:13:40.144 } 00:13:40.144 ] 00:13:40.144 }' 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.144 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 [2024-11-26 22:58:19.493245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.662 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.662 "name": "Existed_Raid", 00:13:40.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.662 "strip_size_kb": 64, 00:13:40.662 "state": "configuring", 00:13:40.662 
"raid_level": "raid5f", 00:13:40.662 "superblock": false, 00:13:40.662 "num_base_bdevs": 3, 00:13:40.662 "num_base_bdevs_discovered": 2, 00:13:40.662 "num_base_bdevs_operational": 3, 00:13:40.662 "base_bdevs_list": [ 00:13:40.662 { 00:13:40.662 "name": "BaseBdev1", 00:13:40.662 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:40.662 "is_configured": true, 00:13:40.662 "data_offset": 0, 00:13:40.662 "data_size": 65536 00:13:40.662 }, 00:13:40.662 { 00:13:40.662 "name": null, 00:13:40.662 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:40.662 "is_configured": false, 00:13:40.662 "data_offset": 0, 00:13:40.662 "data_size": 65536 00:13:40.662 }, 00:13:40.662 { 00:13:40.662 "name": "BaseBdev3", 00:13:40.662 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:40.662 "is_configured": true, 00:13:40.662 "data_offset": 0, 00:13:40.662 "data_size": 65536 00:13:40.662 } 00:13:40.662 ] 00:13:40.662 }' 00:13:40.662 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.662 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.922 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.922 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.922 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.922 22:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.922 22:58:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.922 [2024-11-26 22:58:20.025401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.922 22:58:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.182 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.182 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.182 "name": "Existed_Raid", 00:13:41.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.182 "strip_size_kb": 64, 00:13:41.182 "state": "configuring", 00:13:41.182 "raid_level": "raid5f", 00:13:41.182 "superblock": false, 00:13:41.182 "num_base_bdevs": 3, 00:13:41.182 "num_base_bdevs_discovered": 1, 00:13:41.182 "num_base_bdevs_operational": 3, 00:13:41.182 "base_bdevs_list": [ 00:13:41.182 { 00:13:41.182 "name": null, 00:13:41.182 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:41.182 "is_configured": false, 00:13:41.182 "data_offset": 0, 00:13:41.182 "data_size": 65536 00:13:41.182 }, 00:13:41.182 { 00:13:41.182 "name": null, 00:13:41.182 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:41.182 "is_configured": false, 00:13:41.182 "data_offset": 0, 00:13:41.182 "data_size": 65536 00:13:41.182 }, 00:13:41.182 { 00:13:41.182 "name": "BaseBdev3", 00:13:41.182 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:41.182 "is_configured": true, 00:13:41.182 "data_offset": 0, 00:13:41.182 "data_size": 65536 00:13:41.182 } 00:13:41.182 ] 00:13:41.182 }' 00:13:41.182 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.182 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.442 [2024-11-26 22:58:20.528072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.442 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.702 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.702 "name": "Existed_Raid", 00:13:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.702 "strip_size_kb": 64, 00:13:41.702 "state": "configuring", 00:13:41.702 "raid_level": "raid5f", 00:13:41.702 "superblock": false, 00:13:41.702 "num_base_bdevs": 3, 00:13:41.702 "num_base_bdevs_discovered": 2, 00:13:41.702 "num_base_bdevs_operational": 3, 00:13:41.702 "base_bdevs_list": [ 00:13:41.702 { 00:13:41.702 "name": null, 00:13:41.702 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:41.702 "is_configured": false, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "name": "BaseBdev2", 00:13:41.702 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:41.702 "is_configured": true, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "name": "BaseBdev3", 00:13:41.702 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:41.702 "is_configured": true, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 } 00:13:41.702 ] 00:13:41.702 }' 00:13:41.702 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.702 22:58:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.962 22:58:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.962 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b3700506-47dd-429f-98fd-c2c5c6cb0469 00:13:41.962 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.962 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.962 [2024-11-26 22:58:21.030686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.962 [2024-11-26 22:58:21.030733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:41.962 [2024-11-26 22:58:21.030740] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:41.962 [2024-11-26 22:58:21.030995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:41.963 [2024-11-26 22:58:21.031409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:41.963 [2024-11-26 22:58:21.031434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:41.963 [2024-11-26 22:58:21.031605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.963 NewBaseBdev 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.963 [ 00:13:41.963 { 00:13:41.963 "name": "NewBaseBdev", 00:13:41.963 "aliases": [ 00:13:41.963 "b3700506-47dd-429f-98fd-c2c5c6cb0469" 00:13:41.963 ], 00:13:41.963 "product_name": "Malloc disk", 00:13:41.963 "block_size": 512, 00:13:41.963 "num_blocks": 65536, 00:13:41.963 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:41.963 "assigned_rate_limits": { 00:13:41.963 "rw_ios_per_sec": 0, 00:13:41.963 "rw_mbytes_per_sec": 0, 00:13:41.963 "r_mbytes_per_sec": 0, 00:13:41.963 "w_mbytes_per_sec": 0 00:13:41.963 }, 00:13:41.963 "claimed": true, 00:13:41.963 "claim_type": "exclusive_write", 00:13:41.963 "zoned": false, 00:13:41.963 "supported_io_types": { 00:13:41.963 "read": true, 00:13:41.963 "write": true, 00:13:41.963 "unmap": true, 00:13:41.963 "flush": true, 00:13:41.963 "reset": true, 00:13:41.963 "nvme_admin": false, 00:13:41.963 "nvme_io": false, 00:13:41.963 "nvme_io_md": false, 00:13:41.963 "write_zeroes": true, 00:13:41.963 "zcopy": true, 00:13:41.963 "get_zone_info": false, 00:13:41.963 "zone_management": false, 00:13:41.963 "zone_append": false, 00:13:41.963 "compare": false, 00:13:41.963 "compare_and_write": false, 00:13:41.963 "abort": true, 00:13:41.963 "seek_hole": false, 00:13:41.963 "seek_data": false, 00:13:41.963 "copy": true, 00:13:41.963 "nvme_iov_md": false 00:13:41.963 }, 00:13:41.963 "memory_domains": [ 00:13:41.963 { 00:13:41.963 "dma_device_id": "system", 00:13:41.963 "dma_device_type": 1 00:13:41.963 }, 00:13:41.963 { 00:13:41.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.963 "dma_device_type": 2 00:13:41.963 } 00:13:41.963 ], 00:13:41.963 "driver_specific": {} 00:13:41.963 } 00:13:41.963 ] 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.963 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.223 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.223 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.223 "name": 
"Existed_Raid", 00:13:42.223 "uuid": "3745008f-84e7-4197-aeef-a7dcadeef6a0", 00:13:42.223 "strip_size_kb": 64, 00:13:42.223 "state": "online", 00:13:42.223 "raid_level": "raid5f", 00:13:42.223 "superblock": false, 00:13:42.223 "num_base_bdevs": 3, 00:13:42.223 "num_base_bdevs_discovered": 3, 00:13:42.223 "num_base_bdevs_operational": 3, 00:13:42.223 "base_bdevs_list": [ 00:13:42.223 { 00:13:42.223 "name": "NewBaseBdev", 00:13:42.223 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:42.223 "is_configured": true, 00:13:42.223 "data_offset": 0, 00:13:42.223 "data_size": 65536 00:13:42.223 }, 00:13:42.223 { 00:13:42.223 "name": "BaseBdev2", 00:13:42.223 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:42.223 "is_configured": true, 00:13:42.223 "data_offset": 0, 00:13:42.223 "data_size": 65536 00:13:42.223 }, 00:13:42.223 { 00:13:42.223 "name": "BaseBdev3", 00:13:42.223 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:42.223 "is_configured": true, 00:13:42.223 "data_offset": 0, 00:13:42.223 "data_size": 65536 00:13:42.223 } 00:13:42.223 ] 00:13:42.223 }' 00:13:42.223 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.223 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.482 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.483 
22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.483 [2024-11-26 22:58:21.551040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.483 "name": "Existed_Raid", 00:13:42.483 "aliases": [ 00:13:42.483 "3745008f-84e7-4197-aeef-a7dcadeef6a0" 00:13:42.483 ], 00:13:42.483 "product_name": "Raid Volume", 00:13:42.483 "block_size": 512, 00:13:42.483 "num_blocks": 131072, 00:13:42.483 "uuid": "3745008f-84e7-4197-aeef-a7dcadeef6a0", 00:13:42.483 "assigned_rate_limits": { 00:13:42.483 "rw_ios_per_sec": 0, 00:13:42.483 "rw_mbytes_per_sec": 0, 00:13:42.483 "r_mbytes_per_sec": 0, 00:13:42.483 "w_mbytes_per_sec": 0 00:13:42.483 }, 00:13:42.483 "claimed": false, 00:13:42.483 "zoned": false, 00:13:42.483 "supported_io_types": { 00:13:42.483 "read": true, 00:13:42.483 "write": true, 00:13:42.483 "unmap": false, 00:13:42.483 "flush": false, 00:13:42.483 "reset": true, 00:13:42.483 "nvme_admin": false, 00:13:42.483 "nvme_io": false, 00:13:42.483 "nvme_io_md": false, 00:13:42.483 "write_zeroes": true, 00:13:42.483 "zcopy": false, 00:13:42.483 "get_zone_info": false, 00:13:42.483 "zone_management": false, 00:13:42.483 "zone_append": false, 00:13:42.483 "compare": false, 00:13:42.483 "compare_and_write": false, 00:13:42.483 "abort": false, 00:13:42.483 "seek_hole": false, 00:13:42.483 "seek_data": false, 00:13:42.483 "copy": false, 00:13:42.483 
"nvme_iov_md": false 00:13:42.483 }, 00:13:42.483 "driver_specific": { 00:13:42.483 "raid": { 00:13:42.483 "uuid": "3745008f-84e7-4197-aeef-a7dcadeef6a0", 00:13:42.483 "strip_size_kb": 64, 00:13:42.483 "state": "online", 00:13:42.483 "raid_level": "raid5f", 00:13:42.483 "superblock": false, 00:13:42.483 "num_base_bdevs": 3, 00:13:42.483 "num_base_bdevs_discovered": 3, 00:13:42.483 "num_base_bdevs_operational": 3, 00:13:42.483 "base_bdevs_list": [ 00:13:42.483 { 00:13:42.483 "name": "NewBaseBdev", 00:13:42.483 "uuid": "b3700506-47dd-429f-98fd-c2c5c6cb0469", 00:13:42.483 "is_configured": true, 00:13:42.483 "data_offset": 0, 00:13:42.483 "data_size": 65536 00:13:42.483 }, 00:13:42.483 { 00:13:42.483 "name": "BaseBdev2", 00:13:42.483 "uuid": "a9d78a49-7866-4794-872a-514f32c2a21a", 00:13:42.483 "is_configured": true, 00:13:42.483 "data_offset": 0, 00:13:42.483 "data_size": 65536 00:13:42.483 }, 00:13:42.483 { 00:13:42.483 "name": "BaseBdev3", 00:13:42.483 "uuid": "0e10c956-f8b1-4c2b-a41f-22f9bb8149a1", 00:13:42.483 "is_configured": true, 00:13:42.483 "data_offset": 0, 00:13:42.483 "data_size": 65536 00:13:42.483 } 00:13:42.483 ] 00:13:42.483 } 00:13:42.483 } 00:13:42.483 }' 00:13:42.483 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:42.743 BaseBdev2 00:13:42.743 BaseBdev3' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.743 22:58:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 [2024-11-26 22:58:21.794925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.743 [2024-11-26 22:58:21.794953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.743 [2024-11-26 22:58:21.795009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.743 [2024-11-26 22:58:21.795241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.743 [2024-11-26 22:58:21.795286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92093 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 92093 ']' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 92093 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92093 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.743 killing process with pid 92093 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92093' 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 92093 00:13:42.743 [2024-11-26 22:58:21.843949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.743 22:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 92093 00:13:43.004 [2024-11-26 22:58:21.874886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.004 22:58:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:43.004 00:13:43.004 real 0m8.994s 00:13:43.004 user 0m15.267s 00:13:43.004 sys 0m2.023s 00:13:43.004 22:58:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.004 22:58:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.004 ************************************ 00:13:43.004 END TEST raid5f_state_function_test 00:13:43.004 ************************************ 00:13:43.264 22:58:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test 
raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:43.264 22:58:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:43.264 22:58:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.264 22:58:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.264 ************************************ 00:13:43.264 START TEST raid5f_state_function_test_sb 00:13:43.264 ************************************ 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.264 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.265 22:58:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:43.265 Process raid pid: 92702 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92702 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92702' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92702 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92702 ']' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.265 22:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.265 [2024-11-26 22:58:22.292501] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:43.265 [2024-11-26 22:58:22.292609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.525 [2024-11-26 22:58:22.426976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:43.525 [2024-11-26 22:58:22.466336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.525 [2024-11-26 22:58:22.493815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.525 [2024-11-26 22:58:22.537407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.525 [2024-11-26 22:58:22.537451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.095 [2024-11-26 22:58:23.106144] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.095 [2024-11-26 22:58:23.106190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.095 [2024-11-26 22:58:23.106201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.095 [2024-11-26 22:58:23.106208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.095 [2024-11-26 22:58:23.106221] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.095 [2024-11-26 22:58:23.106227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.095 22:58:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.095 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.095 "name": "Existed_Raid", 00:13:44.095 "uuid": "3f0029f5-45a7-4a39-87c5-9fac0921bb20", 
00:13:44.095 "strip_size_kb": 64, 00:13:44.095 "state": "configuring", 00:13:44.095 "raid_level": "raid5f", 00:13:44.095 "superblock": true, 00:13:44.096 "num_base_bdevs": 3, 00:13:44.096 "num_base_bdevs_discovered": 0, 00:13:44.096 "num_base_bdevs_operational": 3, 00:13:44.096 "base_bdevs_list": [ 00:13:44.096 { 00:13:44.096 "name": "BaseBdev1", 00:13:44.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.096 "is_configured": false, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 0 00:13:44.096 }, 00:13:44.096 { 00:13:44.096 "name": "BaseBdev2", 00:13:44.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.096 "is_configured": false, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 0 00:13:44.096 }, 00:13:44.096 { 00:13:44.096 "name": "BaseBdev3", 00:13:44.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.096 "is_configured": false, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 0 00:13:44.096 } 00:13:44.096 ] 00:13:44.096 }' 00:13:44.096 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.096 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 [2024-11-26 22:58:23.502147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.665 [2024-11-26 22:58:23.502177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 22:58:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 [2024-11-26 22:58:23.514185] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.665 [2024-11-26 22:58:23.514214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.665 [2024-11-26 22:58:23.514223] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.665 [2024-11-26 22:58:23.514229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.665 [2024-11-26 22:58:23.514237] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.665 [2024-11-26 22:58:23.514245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 [2024-11-26 22:58:23.534948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.665 BaseBdev1 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 [ 00:13:44.665 { 00:13:44.665 "name": "BaseBdev1", 00:13:44.665 "aliases": [ 00:13:44.665 "3caec68b-fbaa-4aed-bf14-d6eab3d6525c" 00:13:44.665 ], 00:13:44.665 "product_name": "Malloc disk", 00:13:44.665 "block_size": 512, 00:13:44.665 "num_blocks": 65536, 00:13:44.665 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:44.665 "assigned_rate_limits": { 00:13:44.665 "rw_ios_per_sec": 0, 00:13:44.665 "rw_mbytes_per_sec": 0, 00:13:44.665 "r_mbytes_per_sec": 0, 00:13:44.665 "w_mbytes_per_sec": 0 00:13:44.665 }, 
00:13:44.665 "claimed": true, 00:13:44.665 "claim_type": "exclusive_write", 00:13:44.665 "zoned": false, 00:13:44.665 "supported_io_types": { 00:13:44.665 "read": true, 00:13:44.665 "write": true, 00:13:44.665 "unmap": true, 00:13:44.665 "flush": true, 00:13:44.665 "reset": true, 00:13:44.665 "nvme_admin": false, 00:13:44.665 "nvme_io": false, 00:13:44.665 "nvme_io_md": false, 00:13:44.665 "write_zeroes": true, 00:13:44.665 "zcopy": true, 00:13:44.665 "get_zone_info": false, 00:13:44.665 "zone_management": false, 00:13:44.665 "zone_append": false, 00:13:44.665 "compare": false, 00:13:44.665 "compare_and_write": false, 00:13:44.665 "abort": true, 00:13:44.665 "seek_hole": false, 00:13:44.665 "seek_data": false, 00:13:44.665 "copy": true, 00:13:44.665 "nvme_iov_md": false 00:13:44.665 }, 00:13:44.665 "memory_domains": [ 00:13:44.665 { 00:13:44.665 "dma_device_id": "system", 00:13:44.665 "dma_device_type": 1 00:13:44.665 }, 00:13:44.665 { 00:13:44.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.665 "dma_device_type": 2 00:13:44.665 } 00:13:44.665 ], 00:13:44.665 "driver_specific": {} 00:13:44.665 } 00:13:44.665 ] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:44.665 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.666 "name": "Existed_Raid", 00:13:44.666 "uuid": "e36e30c1-5eb2-4113-b406-20e83f1ae3a1", 00:13:44.666 "strip_size_kb": 64, 00:13:44.666 "state": "configuring", 00:13:44.666 "raid_level": "raid5f", 00:13:44.666 "superblock": true, 00:13:44.666 "num_base_bdevs": 3, 00:13:44.666 "num_base_bdevs_discovered": 1, 00:13:44.666 "num_base_bdevs_operational": 3, 00:13:44.666 "base_bdevs_list": [ 00:13:44.666 { 00:13:44.666 "name": "BaseBdev1", 00:13:44.666 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:44.666 "is_configured": true, 00:13:44.666 "data_offset": 2048, 00:13:44.666 "data_size": 63488 00:13:44.666 }, 00:13:44.666 { 00:13:44.666 "name": "BaseBdev2", 00:13:44.666 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:44.666 "is_configured": false, 00:13:44.666 "data_offset": 0, 00:13:44.666 "data_size": 0 00:13:44.666 }, 00:13:44.666 { 00:13:44.666 "name": "BaseBdev3", 00:13:44.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.666 "is_configured": false, 00:13:44.666 "data_offset": 0, 00:13:44.666 "data_size": 0 00:13:44.666 } 00:13:44.666 ] 00:13:44.666 }' 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.666 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.925 22:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.925 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.925 22:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 [2024-11-26 22:58:24.003072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.926 [2024-11-26 22:58:24.003115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 [2024-11-26 22:58:24.015120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.926 [2024-11-26 22:58:24.016859] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:44.926 [2024-11-26 22:58:24.016893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.926 [2024-11-26 22:58:24.016905] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.926 [2024-11-26 22:58:24.016912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.926 22:58:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.186 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.186 "name": "Existed_Raid", 00:13:45.186 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:45.186 "strip_size_kb": 64, 00:13:45.186 "state": "configuring", 00:13:45.186 "raid_level": "raid5f", 00:13:45.186 "superblock": true, 00:13:45.186 "num_base_bdevs": 3, 00:13:45.186 "num_base_bdevs_discovered": 1, 00:13:45.186 "num_base_bdevs_operational": 3, 00:13:45.186 "base_bdevs_list": [ 00:13:45.186 { 00:13:45.186 "name": "BaseBdev1", 00:13:45.186 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:45.186 "is_configured": true, 00:13:45.186 "data_offset": 2048, 00:13:45.186 "data_size": 63488 00:13:45.186 }, 00:13:45.186 { 00:13:45.186 "name": "BaseBdev2", 00:13:45.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.186 "is_configured": false, 00:13:45.186 "data_offset": 0, 00:13:45.186 "data_size": 0 00:13:45.186 }, 00:13:45.186 { 00:13:45.186 "name": "BaseBdev3", 00:13:45.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.186 "is_configured": false, 00:13:45.186 "data_offset": 0, 00:13:45.186 "data_size": 0 00:13:45.186 } 00:13:45.186 ] 00:13:45.186 }' 00:13:45.186 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.186 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 [2024-11-26 22:58:24.453996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.447 BaseBdev2 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 [ 00:13:45.447 { 00:13:45.447 "name": "BaseBdev2", 00:13:45.447 "aliases": [ 00:13:45.447 "397c8da9-c679-443d-8e4c-f8cf06c8228d" 00:13:45.447 ], 00:13:45.447 "product_name": "Malloc disk", 00:13:45.447 "block_size": 512, 00:13:45.447 "num_blocks": 65536, 00:13:45.447 "uuid": "397c8da9-c679-443d-8e4c-f8cf06c8228d", 00:13:45.447 "assigned_rate_limits": { 00:13:45.447 "rw_ios_per_sec": 0, 00:13:45.447 "rw_mbytes_per_sec": 0, 00:13:45.447 "r_mbytes_per_sec": 0, 00:13:45.447 "w_mbytes_per_sec": 0 00:13:45.447 }, 00:13:45.447 "claimed": true, 00:13:45.447 "claim_type": "exclusive_write", 00:13:45.447 "zoned": false, 00:13:45.447 "supported_io_types": { 00:13:45.447 "read": true, 00:13:45.447 "write": true, 00:13:45.447 "unmap": true, 00:13:45.447 "flush": true, 00:13:45.447 "reset": true, 00:13:45.447 "nvme_admin": false, 00:13:45.447 "nvme_io": false, 00:13:45.447 "nvme_io_md": false, 00:13:45.447 "write_zeroes": true, 00:13:45.447 "zcopy": true, 00:13:45.447 "get_zone_info": false, 00:13:45.447 "zone_management": false, 00:13:45.447 "zone_append": false, 00:13:45.447 "compare": false, 00:13:45.447 "compare_and_write": false, 00:13:45.447 "abort": true, 00:13:45.447 "seek_hole": false, 00:13:45.447 "seek_data": false, 00:13:45.447 "copy": true, 00:13:45.447 "nvme_iov_md": false 00:13:45.447 }, 00:13:45.447 "memory_domains": [ 00:13:45.447 { 00:13:45.447 "dma_device_id": "system", 00:13:45.447 "dma_device_type": 1 00:13:45.447 }, 00:13:45.447 { 00:13:45.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.447 "dma_device_type": 2 00:13:45.447 } 00:13:45.447 ], 00:13:45.447 "driver_specific": {} 00:13:45.447 } 00:13:45.447 ] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 22:58:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.447 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.447 "name": "Existed_Raid", 00:13:45.447 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:45.447 "strip_size_kb": 64, 00:13:45.447 "state": "configuring", 00:13:45.447 "raid_level": "raid5f", 00:13:45.447 "superblock": true, 00:13:45.447 "num_base_bdevs": 3, 00:13:45.447 "num_base_bdevs_discovered": 2, 00:13:45.447 "num_base_bdevs_operational": 3, 00:13:45.447 "base_bdevs_list": [ 00:13:45.447 { 00:13:45.447 "name": "BaseBdev1", 00:13:45.447 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:45.447 "is_configured": true, 00:13:45.447 "data_offset": 2048, 00:13:45.447 "data_size": 63488 00:13:45.447 }, 00:13:45.447 { 00:13:45.447 "name": "BaseBdev2", 00:13:45.447 "uuid": "397c8da9-c679-443d-8e4c-f8cf06c8228d", 00:13:45.447 "is_configured": true, 00:13:45.447 "data_offset": 2048, 00:13:45.447 "data_size": 63488 00:13:45.447 }, 00:13:45.447 { 00:13:45.447 "name": "BaseBdev3", 00:13:45.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.447 "is_configured": false, 00:13:45.447 "data_offset": 0, 00:13:45.447 "data_size": 0 00:13:45.447 } 00:13:45.447 ] 00:13:45.447 }' 00:13:45.448 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.448 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 [2024-11-26 22:58:24.989746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:46.018 [2024-11-26 22:58:24.990441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:46.018 [2024-11-26 22:58:24.990511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:46.018 BaseBdev3 00:13:46.018 [2024-11-26 22:58:24.991596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:46.018 [2024-11-26 22:58:24.993121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.018 [2024-11-26 22:58:24.993220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.018 [2024-11-26 22:58:24.993727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 22:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 22:58:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 [ 00:13:46.018 { 00:13:46.018 "name": "BaseBdev3", 00:13:46.018 "aliases": [ 00:13:46.018 "debd6d9f-d747-4c44-8e7e-c67ae04f6f26" 00:13:46.018 ], 00:13:46.018 "product_name": "Malloc disk", 00:13:46.018 "block_size": 512, 00:13:46.018 "num_blocks": 65536, 00:13:46.018 "uuid": "debd6d9f-d747-4c44-8e7e-c67ae04f6f26", 00:13:46.018 "assigned_rate_limits": { 00:13:46.018 "rw_ios_per_sec": 0, 00:13:46.018 "rw_mbytes_per_sec": 0, 00:13:46.018 "r_mbytes_per_sec": 0, 00:13:46.018 "w_mbytes_per_sec": 0 00:13:46.018 }, 00:13:46.018 "claimed": true, 00:13:46.018 "claim_type": "exclusive_write", 00:13:46.018 "zoned": false, 00:13:46.018 "supported_io_types": { 00:13:46.018 "read": true, 00:13:46.018 "write": true, 00:13:46.018 "unmap": true, 00:13:46.018 "flush": true, 00:13:46.018 "reset": true, 00:13:46.018 "nvme_admin": false, 00:13:46.018 "nvme_io": false, 00:13:46.018 "nvme_io_md": false, 00:13:46.018 "write_zeroes": true, 00:13:46.018 "zcopy": true, 00:13:46.018 "get_zone_info": false, 00:13:46.018 "zone_management": false, 00:13:46.018 "zone_append": false, 00:13:46.018 "compare": false, 00:13:46.018 "compare_and_write": false, 00:13:46.018 "abort": true, 00:13:46.018 "seek_hole": false, 00:13:46.018 "seek_data": false, 00:13:46.018 "copy": true, 00:13:46.018 "nvme_iov_md": false 00:13:46.018 }, 00:13:46.018 "memory_domains": [ 00:13:46.018 { 00:13:46.018 "dma_device_id": "system", 00:13:46.018 "dma_device_type": 1 00:13:46.018 }, 00:13:46.018 { 00:13:46.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.018 
"dma_device_type": 2 00:13:46.018 } 00:13:46.018 ], 00:13:46.018 "driver_specific": {} 00:13:46.018 } 00:13:46.018 ] 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:46.018 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.019 "name": "Existed_Raid", 00:13:46.019 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:46.019 "strip_size_kb": 64, 00:13:46.019 "state": "online", 00:13:46.019 "raid_level": "raid5f", 00:13:46.019 "superblock": true, 00:13:46.019 "num_base_bdevs": 3, 00:13:46.019 "num_base_bdevs_discovered": 3, 00:13:46.019 "num_base_bdevs_operational": 3, 00:13:46.019 "base_bdevs_list": [ 00:13:46.019 { 00:13:46.019 "name": "BaseBdev1", 00:13:46.019 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:46.019 "is_configured": true, 00:13:46.019 "data_offset": 2048, 00:13:46.019 "data_size": 63488 00:13:46.019 }, 00:13:46.019 { 00:13:46.019 "name": "BaseBdev2", 00:13:46.019 "uuid": "397c8da9-c679-443d-8e4c-f8cf06c8228d", 00:13:46.019 "is_configured": true, 00:13:46.019 "data_offset": 2048, 00:13:46.019 "data_size": 63488 00:13:46.019 }, 00:13:46.019 { 00:13:46.019 "name": "BaseBdev3", 00:13:46.019 "uuid": "debd6d9f-d747-4c44-8e7e-c67ae04f6f26", 00:13:46.019 "is_configured": true, 00:13:46.019 "data_offset": 2048, 00:13:46.019 "data_size": 63488 00:13:46.019 } 00:13:46.019 ] 00:13:46.019 }' 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.019 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.589 [2024-11-26 22:58:25.489984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.589 "name": "Existed_Raid", 00:13:46.589 "aliases": [ 00:13:46.589 "27a96980-2eb1-43a9-a193-ff0e2fb82614" 00:13:46.589 ], 00:13:46.589 "product_name": "Raid Volume", 00:13:46.589 "block_size": 512, 00:13:46.589 "num_blocks": 126976, 00:13:46.589 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:46.589 "assigned_rate_limits": { 00:13:46.589 "rw_ios_per_sec": 0, 00:13:46.589 "rw_mbytes_per_sec": 0, 00:13:46.589 "r_mbytes_per_sec": 0, 00:13:46.589 "w_mbytes_per_sec": 0 00:13:46.589 }, 00:13:46.589 "claimed": false, 00:13:46.589 "zoned": false, 00:13:46.589 "supported_io_types": { 00:13:46.589 "read": true, 00:13:46.589 "write": true, 00:13:46.589 "unmap": false, 
00:13:46.589 "flush": false, 00:13:46.589 "reset": true, 00:13:46.589 "nvme_admin": false, 00:13:46.589 "nvme_io": false, 00:13:46.589 "nvme_io_md": false, 00:13:46.589 "write_zeroes": true, 00:13:46.589 "zcopy": false, 00:13:46.589 "get_zone_info": false, 00:13:46.589 "zone_management": false, 00:13:46.589 "zone_append": false, 00:13:46.589 "compare": false, 00:13:46.589 "compare_and_write": false, 00:13:46.589 "abort": false, 00:13:46.589 "seek_hole": false, 00:13:46.589 "seek_data": false, 00:13:46.589 "copy": false, 00:13:46.589 "nvme_iov_md": false 00:13:46.589 }, 00:13:46.589 "driver_specific": { 00:13:46.589 "raid": { 00:13:46.589 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:46.589 "strip_size_kb": 64, 00:13:46.589 "state": "online", 00:13:46.589 "raid_level": "raid5f", 00:13:46.589 "superblock": true, 00:13:46.589 "num_base_bdevs": 3, 00:13:46.589 "num_base_bdevs_discovered": 3, 00:13:46.589 "num_base_bdevs_operational": 3, 00:13:46.589 "base_bdevs_list": [ 00:13:46.589 { 00:13:46.589 "name": "BaseBdev1", 00:13:46.589 "uuid": "3caec68b-fbaa-4aed-bf14-d6eab3d6525c", 00:13:46.589 "is_configured": true, 00:13:46.589 "data_offset": 2048, 00:13:46.589 "data_size": 63488 00:13:46.589 }, 00:13:46.589 { 00:13:46.589 "name": "BaseBdev2", 00:13:46.589 "uuid": "397c8da9-c679-443d-8e4c-f8cf06c8228d", 00:13:46.589 "is_configured": true, 00:13:46.589 "data_offset": 2048, 00:13:46.589 "data_size": 63488 00:13:46.589 }, 00:13:46.589 { 00:13:46.589 "name": "BaseBdev3", 00:13:46.589 "uuid": "debd6d9f-d747-4c44-8e7e-c67ae04f6f26", 00:13:46.589 "is_configured": true, 00:13:46.589 "data_offset": 2048, 00:13:46.589 "data_size": 63488 00:13:46.589 } 00:13:46.589 ] 00:13:46.589 } 00:13:46.589 } 00:13:46.589 }' 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:13:46.589 BaseBdev2 00:13:46.589 BaseBdev3' 00:13:46.589 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.590 22:58:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.590 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.850 [2024-11-26 22:58:25.737937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:46.850 
22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.850 "name": "Existed_Raid", 00:13:46.850 "uuid": "27a96980-2eb1-43a9-a193-ff0e2fb82614", 00:13:46.850 "strip_size_kb": 64, 00:13:46.850 "state": "online", 00:13:46.850 "raid_level": "raid5f", 00:13:46.850 "superblock": true, 00:13:46.850 "num_base_bdevs": 3, 00:13:46.850 "num_base_bdevs_discovered": 2, 00:13:46.850 "num_base_bdevs_operational": 2, 00:13:46.850 "base_bdevs_list": [ 00:13:46.850 { 00:13:46.850 "name": null, 00:13:46.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.850 "is_configured": false, 00:13:46.850 "data_offset": 0, 00:13:46.850 "data_size": 63488 00:13:46.850 }, 00:13:46.850 { 00:13:46.850 "name": "BaseBdev2", 00:13:46.850 "uuid": "397c8da9-c679-443d-8e4c-f8cf06c8228d", 00:13:46.850 "is_configured": true, 00:13:46.850 "data_offset": 2048, 00:13:46.850 "data_size": 63488 00:13:46.850 }, 00:13:46.850 { 00:13:46.850 "name": "BaseBdev3", 00:13:46.850 "uuid": "debd6d9f-d747-4c44-8e7e-c67ae04f6f26", 00:13:46.850 "is_configured": true, 00:13:46.850 "data_offset": 2048, 00:13:46.850 "data_size": 63488 00:13:46.850 } 00:13:46.850 ] 00:13:46.850 }' 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.850 22:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.110 
22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.110 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-11-26 22:58:26.237207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.371 [2024-11-26 22:58:26.237332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.371 [2024-11-26 22:58:26.248481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.371 22:58:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-11-26 22:58:26.292517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.371 [2024-11-26 22:58:26.292565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 22:58:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 BaseBdev2 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.371 22:58:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [ 00:13:47.371 { 00:13:47.371 "name": "BaseBdev2", 00:13:47.371 "aliases": [ 00:13:47.371 "2f37c195-9742-4636-9748-9fff08a6c60f" 00:13:47.371 ], 00:13:47.371 "product_name": "Malloc disk", 00:13:47.371 "block_size": 512, 00:13:47.371 "num_blocks": 65536, 00:13:47.371 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:47.371 "assigned_rate_limits": { 00:13:47.371 "rw_ios_per_sec": 0, 00:13:47.371 "rw_mbytes_per_sec": 0, 00:13:47.371 "r_mbytes_per_sec": 0, 00:13:47.371 "w_mbytes_per_sec": 0 00:13:47.371 }, 00:13:47.371 "claimed": false, 00:13:47.371 "zoned": false, 00:13:47.371 "supported_io_types": { 00:13:47.371 "read": true, 00:13:47.371 "write": true, 00:13:47.371 "unmap": true, 00:13:47.371 "flush": true, 00:13:47.371 "reset": true, 00:13:47.371 "nvme_admin": false, 00:13:47.371 "nvme_io": false, 00:13:47.371 "nvme_io_md": false, 00:13:47.371 "write_zeroes": true, 00:13:47.371 "zcopy": true, 00:13:47.371 "get_zone_info": false, 00:13:47.371 "zone_management": false, 00:13:47.371 "zone_append": false, 00:13:47.371 "compare": false, 00:13:47.371 "compare_and_write": false, 00:13:47.371 "abort": true, 00:13:47.371 "seek_hole": false, 00:13:47.371 "seek_data": false, 00:13:47.371 "copy": true, 00:13:47.371 "nvme_iov_md": false 00:13:47.371 }, 00:13:47.371 "memory_domains": [ 
00:13:47.371 { 00:13:47.371 "dma_device_id": "system", 00:13:47.371 "dma_device_type": 1 00:13:47.371 }, 00:13:47.371 { 00:13:47.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.371 "dma_device_type": 2 00:13:47.371 } 00:13:47.371 ], 00:13:47.371 "driver_specific": {} 00:13:47.371 } 00:13:47.371 ] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 BaseBdev3 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.371 22:58:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.371 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [ 00:13:47.371 { 00:13:47.371 "name": "BaseBdev3", 00:13:47.371 "aliases": [ 00:13:47.371 "29927bc1-64df-4140-b50b-72d19ab223b9" 00:13:47.371 ], 00:13:47.371 "product_name": "Malloc disk", 00:13:47.371 "block_size": 512, 00:13:47.371 "num_blocks": 65536, 00:13:47.371 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:47.371 "assigned_rate_limits": { 00:13:47.371 "rw_ios_per_sec": 0, 00:13:47.371 "rw_mbytes_per_sec": 0, 00:13:47.371 "r_mbytes_per_sec": 0, 00:13:47.371 "w_mbytes_per_sec": 0 00:13:47.371 }, 00:13:47.371 "claimed": false, 00:13:47.371 "zoned": false, 00:13:47.371 "supported_io_types": { 00:13:47.371 "read": true, 00:13:47.371 "write": true, 00:13:47.371 "unmap": true, 00:13:47.371 "flush": true, 00:13:47.371 "reset": true, 00:13:47.371 "nvme_admin": false, 00:13:47.371 "nvme_io": false, 00:13:47.371 "nvme_io_md": false, 00:13:47.371 "write_zeroes": true, 00:13:47.371 "zcopy": true, 00:13:47.371 "get_zone_info": false, 00:13:47.371 "zone_management": false, 00:13:47.371 "zone_append": false, 00:13:47.371 "compare": false, 00:13:47.371 "compare_and_write": false, 00:13:47.371 "abort": true, 00:13:47.371 "seek_hole": false, 00:13:47.371 
"seek_data": false, 00:13:47.371 "copy": true, 00:13:47.371 "nvme_iov_md": false 00:13:47.371 }, 00:13:47.371 "memory_domains": [ 00:13:47.371 { 00:13:47.371 "dma_device_id": "system", 00:13:47.371 "dma_device_type": 1 00:13:47.371 }, 00:13:47.371 { 00:13:47.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.372 "dma_device_type": 2 00:13:47.372 } 00:13:47.372 ], 00:13:47.372 "driver_specific": {} 00:13:47.372 } 00:13:47.372 ] 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.372 [2024-11-26 22:58:26.463246] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.372 [2024-11-26 22:58:26.463308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.372 [2024-11-26 22:58:26.463325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.372 [2024-11-26 22:58:26.465109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.372 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.632 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.632 "name": "Existed_Raid", 00:13:47.632 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:47.632 "strip_size_kb": 64, 00:13:47.632 
"state": "configuring", 00:13:47.632 "raid_level": "raid5f", 00:13:47.632 "superblock": true, 00:13:47.632 "num_base_bdevs": 3, 00:13:47.632 "num_base_bdevs_discovered": 2, 00:13:47.632 "num_base_bdevs_operational": 3, 00:13:47.632 "base_bdevs_list": [ 00:13:47.632 { 00:13:47.632 "name": "BaseBdev1", 00:13:47.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.632 "is_configured": false, 00:13:47.632 "data_offset": 0, 00:13:47.632 "data_size": 0 00:13:47.632 }, 00:13:47.632 { 00:13:47.632 "name": "BaseBdev2", 00:13:47.632 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:47.632 "is_configured": true, 00:13:47.632 "data_offset": 2048, 00:13:47.632 "data_size": 63488 00:13:47.632 }, 00:13:47.632 { 00:13:47.632 "name": "BaseBdev3", 00:13:47.632 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:47.632 "is_configured": true, 00:13:47.632 "data_offset": 2048, 00:13:47.632 "data_size": 63488 00:13:47.632 } 00:13:47.632 ] 00:13:47.632 }' 00:13:47.632 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.632 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.892 [2024-11-26 22:58:26.911348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.892 "name": "Existed_Raid", 00:13:47.892 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:47.892 "strip_size_kb": 64, 00:13:47.892 "state": "configuring", 00:13:47.892 "raid_level": "raid5f", 00:13:47.892 "superblock": true, 00:13:47.892 "num_base_bdevs": 3, 00:13:47.892 "num_base_bdevs_discovered": 1, 
00:13:47.892 "num_base_bdevs_operational": 3, 00:13:47.892 "base_bdevs_list": [ 00:13:47.892 { 00:13:47.892 "name": "BaseBdev1", 00:13:47.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.892 "is_configured": false, 00:13:47.892 "data_offset": 0, 00:13:47.892 "data_size": 0 00:13:47.892 }, 00:13:47.892 { 00:13:47.892 "name": null, 00:13:47.892 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:47.892 "is_configured": false, 00:13:47.892 "data_offset": 0, 00:13:47.892 "data_size": 63488 00:13:47.892 }, 00:13:47.892 { 00:13:47.892 "name": "BaseBdev3", 00:13:47.892 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:47.892 "is_configured": true, 00:13:47.892 "data_offset": 2048, 00:13:47.892 "data_size": 63488 00:13:47.892 } 00:13:47.892 ] 00:13:47.892 }' 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.892 22:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.462 22:58:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.462 [2024-11-26 22:58:27.418352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.462 BaseBdev1 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.462 [ 00:13:48.462 { 00:13:48.462 "name": "BaseBdev1", 00:13:48.462 "aliases": [ 00:13:48.462 
"2fd3fe4d-70ef-4e63-8e74-ea260ea0b057" 00:13:48.462 ], 00:13:48.462 "product_name": "Malloc disk", 00:13:48.462 "block_size": 512, 00:13:48.462 "num_blocks": 65536, 00:13:48.462 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:48.462 "assigned_rate_limits": { 00:13:48.462 "rw_ios_per_sec": 0, 00:13:48.462 "rw_mbytes_per_sec": 0, 00:13:48.462 "r_mbytes_per_sec": 0, 00:13:48.462 "w_mbytes_per_sec": 0 00:13:48.462 }, 00:13:48.462 "claimed": true, 00:13:48.462 "claim_type": "exclusive_write", 00:13:48.462 "zoned": false, 00:13:48.462 "supported_io_types": { 00:13:48.462 "read": true, 00:13:48.462 "write": true, 00:13:48.462 "unmap": true, 00:13:48.462 "flush": true, 00:13:48.462 "reset": true, 00:13:48.462 "nvme_admin": false, 00:13:48.462 "nvme_io": false, 00:13:48.462 "nvme_io_md": false, 00:13:48.462 "write_zeroes": true, 00:13:48.462 "zcopy": true, 00:13:48.462 "get_zone_info": false, 00:13:48.462 "zone_management": false, 00:13:48.462 "zone_append": false, 00:13:48.462 "compare": false, 00:13:48.462 "compare_and_write": false, 00:13:48.462 "abort": true, 00:13:48.462 "seek_hole": false, 00:13:48.462 "seek_data": false, 00:13:48.462 "copy": true, 00:13:48.462 "nvme_iov_md": false 00:13:48.462 }, 00:13:48.462 "memory_domains": [ 00:13:48.462 { 00:13:48.462 "dma_device_id": "system", 00:13:48.462 "dma_device_type": 1 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.462 "dma_device_type": 2 00:13:48.462 } 00:13:48.462 ], 00:13:48.462 "driver_specific": {} 00:13:48.462 } 00:13:48.462 ] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:48.462 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.463 "name": "Existed_Raid", 00:13:48.463 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:48.463 "strip_size_kb": 64, 00:13:48.463 "state": "configuring", 00:13:48.463 "raid_level": "raid5f", 00:13:48.463 "superblock": true, 00:13:48.463 "num_base_bdevs": 3, 00:13:48.463 
"num_base_bdevs_discovered": 2, 00:13:48.463 "num_base_bdevs_operational": 3, 00:13:48.463 "base_bdevs_list": [ 00:13:48.463 { 00:13:48.463 "name": "BaseBdev1", 00:13:48.463 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:48.463 "is_configured": true, 00:13:48.463 "data_offset": 2048, 00:13:48.463 "data_size": 63488 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "name": null, 00:13:48.463 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:48.463 "is_configured": false, 00:13:48.463 "data_offset": 0, 00:13:48.463 "data_size": 63488 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "name": "BaseBdev3", 00:13:48.463 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:48.463 "is_configured": true, 00:13:48.463 "data_offset": 2048, 00:13:48.463 "data_size": 63488 00:13:48.463 } 00:13:48.463 ] 00:13:48.463 }' 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.463 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.033 [2024-11-26 22:58:27.890515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.033 "name": "Existed_Raid", 00:13:49.033 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:49.033 "strip_size_kb": 64, 00:13:49.033 "state": "configuring", 00:13:49.033 "raid_level": "raid5f", 00:13:49.033 "superblock": true, 00:13:49.033 "num_base_bdevs": 3, 00:13:49.033 "num_base_bdevs_discovered": 1, 00:13:49.033 "num_base_bdevs_operational": 3, 00:13:49.033 "base_bdevs_list": [ 00:13:49.033 { 00:13:49.033 "name": "BaseBdev1", 00:13:49.033 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:49.033 "is_configured": true, 00:13:49.033 "data_offset": 2048, 00:13:49.033 "data_size": 63488 00:13:49.033 }, 00:13:49.033 { 00:13:49.033 "name": null, 00:13:49.033 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:49.033 "is_configured": false, 00:13:49.033 "data_offset": 0, 00:13:49.033 "data_size": 63488 00:13:49.033 }, 00:13:49.033 { 00:13:49.033 "name": null, 00:13:49.033 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:49.033 "is_configured": false, 00:13:49.033 "data_offset": 0, 00:13:49.033 "data_size": 63488 00:13:49.033 } 00:13:49.033 ] 00:13:49.033 }' 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.033 22:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.293 22:58:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.293 [2024-11-26 22:58:28.382676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.293 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.294 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.553 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.553 "name": "Existed_Raid", 00:13:49.553 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:49.553 "strip_size_kb": 64, 00:13:49.553 "state": "configuring", 00:13:49.553 "raid_level": "raid5f", 00:13:49.553 "superblock": true, 00:13:49.553 "num_base_bdevs": 3, 00:13:49.553 "num_base_bdevs_discovered": 2, 00:13:49.553 "num_base_bdevs_operational": 3, 00:13:49.553 "base_bdevs_list": [ 00:13:49.553 { 00:13:49.553 "name": "BaseBdev1", 00:13:49.553 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:49.553 "is_configured": true, 00:13:49.553 "data_offset": 2048, 00:13:49.553 "data_size": 63488 00:13:49.553 }, 00:13:49.553 { 00:13:49.553 "name": null, 00:13:49.553 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:49.553 "is_configured": false, 00:13:49.553 "data_offset": 0, 00:13:49.553 "data_size": 63488 00:13:49.553 }, 00:13:49.553 { 00:13:49.553 "name": "BaseBdev3", 00:13:49.553 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:49.553 "is_configured": true, 00:13:49.553 "data_offset": 2048, 00:13:49.553 "data_size": 63488 00:13:49.553 } 00:13:49.553 ] 00:13:49.553 }' 00:13:49.553 22:58:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.553 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.814 [2024-11-26 22:58:28.838837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.814 "name": "Existed_Raid", 00:13:49.814 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:49.814 "strip_size_kb": 64, 00:13:49.814 "state": "configuring", 00:13:49.814 "raid_level": "raid5f", 00:13:49.814 "superblock": true, 00:13:49.814 "num_base_bdevs": 3, 00:13:49.814 "num_base_bdevs_discovered": 1, 00:13:49.814 "num_base_bdevs_operational": 3, 00:13:49.814 "base_bdevs_list": [ 00:13:49.814 { 00:13:49.814 "name": null, 00:13:49.814 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:49.814 "is_configured": false, 00:13:49.814 "data_offset": 0, 00:13:49.814 "data_size": 63488 
00:13:49.814 }, 00:13:49.814 { 00:13:49.814 "name": null, 00:13:49.814 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:49.814 "is_configured": false, 00:13:49.814 "data_offset": 0, 00:13:49.814 "data_size": 63488 00:13:49.814 }, 00:13:49.814 { 00:13:49.814 "name": "BaseBdev3", 00:13:49.814 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:49.814 "is_configured": true, 00:13:49.814 "data_offset": 2048, 00:13:49.814 "data_size": 63488 00:13:49.814 } 00:13:49.814 ] 00:13:49.814 }' 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.814 22:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.382 [2024-11-26 22:58:29.349243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.382 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.383 "name": 
"Existed_Raid", 00:13:50.383 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:50.383 "strip_size_kb": 64, 00:13:50.383 "state": "configuring", 00:13:50.383 "raid_level": "raid5f", 00:13:50.383 "superblock": true, 00:13:50.383 "num_base_bdevs": 3, 00:13:50.383 "num_base_bdevs_discovered": 2, 00:13:50.383 "num_base_bdevs_operational": 3, 00:13:50.383 "base_bdevs_list": [ 00:13:50.383 { 00:13:50.383 "name": null, 00:13:50.383 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:50.383 "is_configured": false, 00:13:50.383 "data_offset": 0, 00:13:50.383 "data_size": 63488 00:13:50.383 }, 00:13:50.383 { 00:13:50.383 "name": "BaseBdev2", 00:13:50.383 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:50.383 "is_configured": true, 00:13:50.383 "data_offset": 2048, 00:13:50.383 "data_size": 63488 00:13:50.383 }, 00:13:50.383 { 00:13:50.383 "name": "BaseBdev3", 00:13:50.383 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:50.383 "is_configured": true, 00:13:50.383 "data_offset": 2048, 00:13:50.383 "data_size": 63488 00:13:50.383 } 00:13:50.383 ] 00:13:50.383 }' 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.383 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e 
]] 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2fd3fe4d-70ef-4e63-8e74-ea260ea0b057 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 [2024-11-26 22:58:29.891623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:50.951 [2024-11-26 22:58:29.891800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:50.951 [2024-11-26 22:58:29.891812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:50.951 [2024-11-26 22:58:29.892063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:50.951 NewBaseBdev 00:13:50.951 [2024-11-26 22:58:29.892452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:50.951 [2024-11-26 22:58:29.892476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:50.951 [2024-11-26 22:58:29.892570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.951 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.952 [ 00:13:50.952 { 00:13:50.952 "name": "NewBaseBdev", 00:13:50.952 "aliases": [ 00:13:50.952 "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057" 00:13:50.952 ], 00:13:50.952 "product_name": "Malloc disk", 00:13:50.952 "block_size": 512, 00:13:50.952 "num_blocks": 65536, 00:13:50.952 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:50.952 "assigned_rate_limits": { 00:13:50.952 "rw_ios_per_sec": 0, 00:13:50.952 
"rw_mbytes_per_sec": 0, 00:13:50.952 "r_mbytes_per_sec": 0, 00:13:50.952 "w_mbytes_per_sec": 0 00:13:50.952 }, 00:13:50.952 "claimed": true, 00:13:50.952 "claim_type": "exclusive_write", 00:13:50.952 "zoned": false, 00:13:50.952 "supported_io_types": { 00:13:50.952 "read": true, 00:13:50.952 "write": true, 00:13:50.952 "unmap": true, 00:13:50.952 "flush": true, 00:13:50.952 "reset": true, 00:13:50.952 "nvme_admin": false, 00:13:50.952 "nvme_io": false, 00:13:50.952 "nvme_io_md": false, 00:13:50.952 "write_zeroes": true, 00:13:50.952 "zcopy": true, 00:13:50.952 "get_zone_info": false, 00:13:50.952 "zone_management": false, 00:13:50.952 "zone_append": false, 00:13:50.952 "compare": false, 00:13:50.952 "compare_and_write": false, 00:13:50.952 "abort": true, 00:13:50.952 "seek_hole": false, 00:13:50.952 "seek_data": false, 00:13:50.952 "copy": true, 00:13:50.952 "nvme_iov_md": false 00:13:50.952 }, 00:13:50.952 "memory_domains": [ 00:13:50.952 { 00:13:50.952 "dma_device_id": "system", 00:13:50.952 "dma_device_type": 1 00:13:50.952 }, 00:13:50.952 { 00:13:50.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.952 "dma_device_type": 2 00:13:50.952 } 00:13:50.952 ], 00:13:50.952 "driver_specific": {} 00:13:50.952 } 00:13:50.952 ] 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.952 22:58:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.952 "name": "Existed_Raid", 00:13:50.952 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:50.952 "strip_size_kb": 64, 00:13:50.952 "state": "online", 00:13:50.952 "raid_level": "raid5f", 00:13:50.952 "superblock": true, 00:13:50.952 "num_base_bdevs": 3, 00:13:50.952 "num_base_bdevs_discovered": 3, 00:13:50.952 "num_base_bdevs_operational": 3, 00:13:50.952 "base_bdevs_list": [ 00:13:50.952 { 00:13:50.952 "name": "NewBaseBdev", 00:13:50.952 "uuid": "2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:50.952 "is_configured": true, 00:13:50.952 "data_offset": 2048, 00:13:50.952 "data_size": 63488 00:13:50.952 }, 
00:13:50.952 { 00:13:50.952 "name": "BaseBdev2", 00:13:50.952 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:50.952 "is_configured": true, 00:13:50.952 "data_offset": 2048, 00:13:50.952 "data_size": 63488 00:13:50.952 }, 00:13:50.952 { 00:13:50.952 "name": "BaseBdev3", 00:13:50.952 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:50.952 "is_configured": true, 00:13:50.952 "data_offset": 2048, 00:13:50.952 "data_size": 63488 00:13:50.952 } 00:13:50.952 ] 00:13:50.952 }' 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.952 22:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.520 [2024-11-26 22:58:30.375952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.520 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.520 "name": "Existed_Raid", 00:13:51.520 "aliases": [ 00:13:51.520 "4988cea2-f6fa-4489-bcae-5bee5bd3531d" 00:13:51.520 ], 00:13:51.520 "product_name": "Raid Volume", 00:13:51.521 "block_size": 512, 00:13:51.521 "num_blocks": 126976, 00:13:51.521 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:51.521 "assigned_rate_limits": { 00:13:51.521 "rw_ios_per_sec": 0, 00:13:51.521 "rw_mbytes_per_sec": 0, 00:13:51.521 "r_mbytes_per_sec": 0, 00:13:51.521 "w_mbytes_per_sec": 0 00:13:51.521 }, 00:13:51.521 "claimed": false, 00:13:51.521 "zoned": false, 00:13:51.521 "supported_io_types": { 00:13:51.521 "read": true, 00:13:51.521 "write": true, 00:13:51.521 "unmap": false, 00:13:51.521 "flush": false, 00:13:51.521 "reset": true, 00:13:51.521 "nvme_admin": false, 00:13:51.521 "nvme_io": false, 00:13:51.521 "nvme_io_md": false, 00:13:51.521 "write_zeroes": true, 00:13:51.521 "zcopy": false, 00:13:51.521 "get_zone_info": false, 00:13:51.521 "zone_management": false, 00:13:51.521 "zone_append": false, 00:13:51.521 "compare": false, 00:13:51.521 "compare_and_write": false, 00:13:51.521 "abort": false, 00:13:51.521 "seek_hole": false, 00:13:51.521 "seek_data": false, 00:13:51.521 "copy": false, 00:13:51.521 "nvme_iov_md": false 00:13:51.521 }, 00:13:51.521 "driver_specific": { 00:13:51.521 "raid": { 00:13:51.521 "uuid": "4988cea2-f6fa-4489-bcae-5bee5bd3531d", 00:13:51.521 "strip_size_kb": 64, 00:13:51.521 "state": "online", 00:13:51.521 "raid_level": "raid5f", 00:13:51.521 "superblock": true, 00:13:51.521 "num_base_bdevs": 3, 00:13:51.521 "num_base_bdevs_discovered": 3, 00:13:51.521 "num_base_bdevs_operational": 3, 00:13:51.521 "base_bdevs_list": [ 00:13:51.521 { 00:13:51.521 "name": "NewBaseBdev", 00:13:51.521 "uuid": 
"2fd3fe4d-70ef-4e63-8e74-ea260ea0b057", 00:13:51.521 "is_configured": true, 00:13:51.521 "data_offset": 2048, 00:13:51.521 "data_size": 63488 00:13:51.521 }, 00:13:51.521 { 00:13:51.521 "name": "BaseBdev2", 00:13:51.521 "uuid": "2f37c195-9742-4636-9748-9fff08a6c60f", 00:13:51.521 "is_configured": true, 00:13:51.521 "data_offset": 2048, 00:13:51.521 "data_size": 63488 00:13:51.521 }, 00:13:51.521 { 00:13:51.521 "name": "BaseBdev3", 00:13:51.521 "uuid": "29927bc1-64df-4140-b50b-72d19ab223b9", 00:13:51.521 "is_configured": true, 00:13:51.521 "data_offset": 2048, 00:13:51.521 "data_size": 63488 00:13:51.521 } 00:13:51.521 ] 00:13:51.521 } 00:13:51.521 } 00:13:51.521 }' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:51.521 BaseBdev2 00:13:51.521 BaseBdev3' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.521 [2024-11-26 22:58:30.623841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.521 [2024-11-26 22:58:30.623868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.521 [2024-11-26 22:58:30.623920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.521 [2024-11-26 22:58:30.624143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.521 [2024-11-26 22:58:30.624162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92702 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92702 ']' 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 92702 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:51.521 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.521 22:58:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92702 00:13:51.782 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.782 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.782 killing process with pid 92702 00:13:51.782 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92702' 00:13:51.782 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 92702 00:13:51.782 [2024-11-26 22:58:30.673942] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.782 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 92702 00:13:51.782 [2024-11-26 22:58:30.704646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.042 22:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:52.042 00:13:52.042 real 0m8.737s 00:13:52.042 user 0m14.785s 00:13:52.042 sys 0m1.974s 00:13:52.042 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.042 22:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.042 ************************************ 00:13:52.042 END TEST raid5f_state_function_test_sb 00:13:52.042 ************************************ 00:13:52.042 22:58:30 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:52.042 22:58:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:52.042 22:58:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.042 22:58:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.042 ************************************ 00:13:52.042 START TEST 
raid5f_superblock_test 00:13:52.042 ************************************ 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:52.042 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=93306 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93306 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93306 ']' 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.043 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.043 [2024-11-26 22:58:31.100125] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:52.043 [2024-11-26 22:58:31.100232] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93306 ] 00:13:52.302 [2024-11-26 22:58:31.234002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:52.302 [2024-11-26 22:58:31.272074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.302 [2024-11-26 22:58:31.298730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.302 [2024-11-26 22:58:31.341668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.302 [2024-11-26 22:58:31.341718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.870 malloc1 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.870 [2024-11-26 22:58:31.930636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.870 [2024-11-26 22:58:31.930695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.870 [2024-11-26 22:58:31.930719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.870 [2024-11-26 22:58:31.930728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.870 [2024-11-26 22:58:31.932748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.870 [2024-11-26 22:58:31.932787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.870 pt1 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:52.870 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.871 22:58:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.871 malloc2 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.871 [2024-11-26 22:58:31.959109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.871 [2024-11-26 22:58:31.959158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.871 [2024-11-26 22:58:31.959177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.871 [2024-11-26 22:58:31.959185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.871 [2024-11-26 22:58:31.961062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.871 [2024-11-26 22:58:31.961097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.871 pt2 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.871 malloc3 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.871 [2024-11-26 22:58:31.987596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:52.871 [2024-11-26 22:58:31.987659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.871 [2024-11-26 22:58:31.987679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:13:52.871 [2024-11-26 22:58:31.987687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.871 [2024-11-26 22:58:31.989583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.871 [2024-11-26 22:58:31.989616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:52.871 pt3 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.871 22:58:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.130 [2024-11-26 22:58:31.999658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.130 [2024-11-26 22:58:32.001441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:53.130 [2024-11-26 22:58:32.001503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:53.130 [2024-11-26 22:58:32.001660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:53.130 [2024-11-26 22:58:32.001682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:53.130 [2024-11-26 22:58:32.001922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:53.130 [2024-11-26 22:58:32.002339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:53.131 [2024-11-26 22:58:32.002361] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:53.131 [2024-11-26 22:58:32.002470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.131 "name": "raid_bdev1", 00:13:53.131 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:53.131 "strip_size_kb": 64, 00:13:53.131 "state": "online", 00:13:53.131 "raid_level": "raid5f", 00:13:53.131 "superblock": true, 00:13:53.131 "num_base_bdevs": 3, 00:13:53.131 "num_base_bdevs_discovered": 3, 00:13:53.131 "num_base_bdevs_operational": 3, 00:13:53.131 "base_bdevs_list": [ 00:13:53.131 { 00:13:53.131 "name": "pt1", 00:13:53.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.131 "is_configured": true, 00:13:53.131 "data_offset": 2048, 00:13:53.131 "data_size": 63488 00:13:53.131 }, 00:13:53.131 { 00:13:53.131 "name": "pt2", 00:13:53.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.131 "is_configured": true, 00:13:53.131 "data_offset": 2048, 00:13:53.131 "data_size": 63488 00:13:53.131 }, 00:13:53.131 { 00:13:53.131 "name": "pt3", 00:13:53.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.131 "is_configured": true, 00:13:53.131 "data_offset": 2048, 00:13:53.131 "data_size": 63488 00:13:53.131 } 00:13:53.131 ] 00:13:53.131 }' 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.131 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.390 
22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.390 [2024-11-26 22:58:32.456103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.390 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.390 "name": "raid_bdev1", 00:13:53.390 "aliases": [ 00:13:53.390 "5375885d-edf1-4eed-8409-9017633ee69f" 00:13:53.390 ], 00:13:53.390 "product_name": "Raid Volume", 00:13:53.390 "block_size": 512, 00:13:53.390 "num_blocks": 126976, 00:13:53.390 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:53.390 "assigned_rate_limits": { 00:13:53.390 "rw_ios_per_sec": 0, 00:13:53.390 "rw_mbytes_per_sec": 0, 00:13:53.390 "r_mbytes_per_sec": 0, 00:13:53.390 "w_mbytes_per_sec": 0 00:13:53.390 }, 00:13:53.390 "claimed": false, 00:13:53.390 "zoned": false, 00:13:53.390 "supported_io_types": { 00:13:53.390 "read": true, 00:13:53.390 "write": true, 00:13:53.390 "unmap": false, 00:13:53.390 "flush": false, 00:13:53.390 "reset": true, 00:13:53.390 "nvme_admin": false, 00:13:53.390 "nvme_io": false, 00:13:53.390 "nvme_io_md": false, 00:13:53.390 "write_zeroes": true, 00:13:53.390 "zcopy": false, 00:13:53.390 "get_zone_info": false, 00:13:53.390 "zone_management": false, 00:13:53.390 "zone_append": false, 00:13:53.390 "compare": false, 00:13:53.390 "compare_and_write": false, 00:13:53.390 "abort": false, 00:13:53.390 "seek_hole": 
false, 00:13:53.390 "seek_data": false, 00:13:53.390 "copy": false, 00:13:53.390 "nvme_iov_md": false 00:13:53.390 }, 00:13:53.390 "driver_specific": { 00:13:53.390 "raid": { 00:13:53.390 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:53.390 "strip_size_kb": 64, 00:13:53.391 "state": "online", 00:13:53.391 "raid_level": "raid5f", 00:13:53.391 "superblock": true, 00:13:53.391 "num_base_bdevs": 3, 00:13:53.391 "num_base_bdevs_discovered": 3, 00:13:53.391 "num_base_bdevs_operational": 3, 00:13:53.391 "base_bdevs_list": [ 00:13:53.391 { 00:13:53.391 "name": "pt1", 00:13:53.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.391 "is_configured": true, 00:13:53.391 "data_offset": 2048, 00:13:53.391 "data_size": 63488 00:13:53.391 }, 00:13:53.391 { 00:13:53.391 "name": "pt2", 00:13:53.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.391 "is_configured": true, 00:13:53.391 "data_offset": 2048, 00:13:53.391 "data_size": 63488 00:13:53.391 }, 00:13:53.391 { 00:13:53.391 "name": "pt3", 00:13:53.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.391 "is_configured": true, 00:13:53.391 "data_offset": 2048, 00:13:53.391 "data_size": 63488 00:13:53.391 } 00:13:53.391 ] 00:13:53.391 } 00:13:53.391 } 00:13:53.391 }' 00:13:53.391 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:53.650 pt2 00:13:53.650 pt3' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.650 [2024-11-26 22:58:32.752177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5375885d-edf1-4eed-8409-9017633ee69f 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5375885d-edf1-4eed-8409-9017633ee69f ']' 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.650 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 [2024-11-26 22:58:32.780017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.909 [2024-11-26 
22:58:32.780052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.909 [2024-11-26 22:58:32.780112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.909 [2024-11-26 22:58:32.780179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.909 [2024-11-26 22:58:32.780193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i 
in "${base_bdevs_pt[@]}" 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:53.909 22:58:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.909 [2024-11-26 22:58:32.932086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:53.909 [2024-11-26 22:58:32.933815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:53.909 [2024-11-26 22:58:32.933862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:53.909 [2024-11-26 22:58:32.933900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:53.909 [2024-11-26 22:58:32.933937] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:53.909 [2024-11-26 22:58:32.933953] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:53.909 [2024-11-26 22:58:32.933965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:13:53.909 [2024-11-26 22:58:32.933973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:13:53.909 request: 00:13:53.909 { 00:13:53.909 "name": "raid_bdev1", 00:13:53.909 "raid_level": "raid5f", 00:13:53.909 "base_bdevs": [ 00:13:53.909 "malloc1", 00:13:53.909 "malloc2", 00:13:53.909 "malloc3" 00:13:53.909 ], 00:13:53.909 "strip_size_kb": 64, 00:13:53.909 "superblock": false, 00:13:53.909 "method": "bdev_raid_create", 00:13:53.909 "req_id": 1 00:13:53.909 } 00:13:53.909 Got JSON-RPC error response 00:13:53.909 response: 00:13:53.909 { 00:13:53.909 "code": -17, 00:13:53.909 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:53.909 } 00:13:53.909 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n 
'' ']' 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.910 22:58:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.910 [2024-11-26 22:58:33.000073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:53.910 [2024-11-26 22:58:33.000132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.910 [2024-11-26 22:58:33.000148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:53.910 [2024-11-26 22:58:33.000156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.910 [2024-11-26 22:58:33.002122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.910 [2024-11-26 22:58:33.002156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:53.910 [2024-11-26 22:58:33.002226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:53.910 [2024-11-26 22:58:33.002274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.910 pt1 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.910 22:58:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.910 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.168 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.168 "name": "raid_bdev1", 00:13:54.168 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:54.168 "strip_size_kb": 64, 00:13:54.168 "state": "configuring", 00:13:54.168 "raid_level": "raid5f", 00:13:54.168 "superblock": true, 00:13:54.168 "num_base_bdevs": 3, 00:13:54.168 "num_base_bdevs_discovered": 1, 00:13:54.168 "num_base_bdevs_operational": 3, 00:13:54.168 "base_bdevs_list": [ 00:13:54.168 { 00:13:54.168 "name": "pt1", 00:13:54.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.168 "is_configured": true, 00:13:54.168 "data_offset": 2048, 00:13:54.168 "data_size": 63488 00:13:54.168 }, 00:13:54.168 { 00:13:54.168 "name": null, 00:13:54.168 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:13:54.169 "is_configured": false, 00:13:54.169 "data_offset": 2048, 00:13:54.169 "data_size": 63488 00:13:54.169 }, 00:13:54.169 { 00:13:54.169 "name": null, 00:13:54.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.169 "is_configured": false, 00:13:54.169 "data_offset": 2048, 00:13:54.169 "data_size": 63488 00:13:54.169 } 00:13:54.169 ] 00:13:54.169 }' 00:13:54.169 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.169 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.427 [2024-11-26 22:58:33.424193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.427 [2024-11-26 22:58:33.424243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.427 [2024-11-26 22:58:33.424271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:54.427 [2024-11-26 22:58:33.424278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.427 [2024-11-26 22:58:33.424562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.427 [2024-11-26 22:58:33.424583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.427 [2024-11-26 22:58:33.424633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.427 [2024-11-26 22:58:33.424654] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.427 pt2 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.427 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.427 [2024-11-26 22:58:33.436237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.428 
22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.428 "name": "raid_bdev1", 00:13:54.428 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:54.428 "strip_size_kb": 64, 00:13:54.428 "state": "configuring", 00:13:54.428 "raid_level": "raid5f", 00:13:54.428 "superblock": true, 00:13:54.428 "num_base_bdevs": 3, 00:13:54.428 "num_base_bdevs_discovered": 1, 00:13:54.428 "num_base_bdevs_operational": 3, 00:13:54.428 "base_bdevs_list": [ 00:13:54.428 { 00:13:54.428 "name": "pt1", 00:13:54.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.428 "is_configured": true, 00:13:54.428 "data_offset": 2048, 00:13:54.428 "data_size": 63488 00:13:54.428 }, 00:13:54.428 { 00:13:54.428 "name": null, 00:13:54.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.428 "is_configured": false, 00:13:54.428 "data_offset": 0, 00:13:54.428 "data_size": 63488 00:13:54.428 }, 00:13:54.428 { 00:13:54.428 "name": null, 00:13:54.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.428 "is_configured": false, 00:13:54.428 "data_offset": 2048, 00:13:54.428 "data_size": 63488 00:13:54.428 } 00:13:54.428 ] 00:13:54.428 }' 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.428 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.994 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:54.994 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:54.994 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.994 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.994 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.994 [2024-11-26 22:58:33.900343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.994 [2024-11-26 22:58:33.900391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.994 [2024-11-26 22:58:33.900404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:54.994 [2024-11-26 22:58:33.900413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.994 [2024-11-26 22:58:33.900702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.994 [2024-11-26 22:58:33.900730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.995 [2024-11-26 22:58:33.900778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.995 [2024-11-26 22:58:33.900797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.995 pt2 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.995 
22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.995 [2024-11-26 22:58:33.912332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:54.995 [2024-11-26 22:58:33.912392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.995 [2024-11-26 22:58:33.912404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:54.995 [2024-11-26 22:58:33.912413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.995 [2024-11-26 22:58:33.912685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.995 [2024-11-26 22:58:33.912709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:54.995 [2024-11-26 22:58:33.912754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:54.995 [2024-11-26 22:58:33.912770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:54.995 [2024-11-26 22:58:33.912858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:54.995 [2024-11-26 22:58:33.912871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:54.995 [2024-11-26 22:58:33.913078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:54.995 [2024-11-26 22:58:33.913449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:54.995 [2024-11-26 22:58:33.913467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:54.995 [2024-11-26 22:58:33.913553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.995 pt3 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.995 22:58:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.995 "name": 
"raid_bdev1", 00:13:54.995 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:54.995 "strip_size_kb": 64, 00:13:54.995 "state": "online", 00:13:54.995 "raid_level": "raid5f", 00:13:54.995 "superblock": true, 00:13:54.995 "num_base_bdevs": 3, 00:13:54.995 "num_base_bdevs_discovered": 3, 00:13:54.995 "num_base_bdevs_operational": 3, 00:13:54.995 "base_bdevs_list": [ 00:13:54.995 { 00:13:54.995 "name": "pt1", 00:13:54.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.995 "is_configured": true, 00:13:54.995 "data_offset": 2048, 00:13:54.995 "data_size": 63488 00:13:54.995 }, 00:13:54.995 { 00:13:54.995 "name": "pt2", 00:13:54.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.995 "is_configured": true, 00:13:54.995 "data_offset": 2048, 00:13:54.995 "data_size": 63488 00:13:54.995 }, 00:13:54.995 { 00:13:54.995 "name": "pt3", 00:13:54.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.995 "is_configured": true, 00:13:54.995 "data_offset": 2048, 00:13:54.995 "data_size": 63488 00:13:54.995 } 00:13:54.995 ] 00:13:54.995 }' 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.995 22:58:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.255 [2024-11-26 22:58:34.348633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.255 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.515 "name": "raid_bdev1", 00:13:55.515 "aliases": [ 00:13:55.515 "5375885d-edf1-4eed-8409-9017633ee69f" 00:13:55.515 ], 00:13:55.515 "product_name": "Raid Volume", 00:13:55.515 "block_size": 512, 00:13:55.515 "num_blocks": 126976, 00:13:55.515 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:55.515 "assigned_rate_limits": { 00:13:55.515 "rw_ios_per_sec": 0, 00:13:55.515 "rw_mbytes_per_sec": 0, 00:13:55.515 "r_mbytes_per_sec": 0, 00:13:55.515 "w_mbytes_per_sec": 0 00:13:55.515 }, 00:13:55.515 "claimed": false, 00:13:55.515 "zoned": false, 00:13:55.515 "supported_io_types": { 00:13:55.515 "read": true, 00:13:55.515 "write": true, 00:13:55.515 "unmap": false, 00:13:55.515 "flush": false, 00:13:55.515 "reset": true, 00:13:55.515 "nvme_admin": false, 00:13:55.515 "nvme_io": false, 00:13:55.515 "nvme_io_md": false, 00:13:55.515 "write_zeroes": true, 00:13:55.515 "zcopy": false, 00:13:55.515 "get_zone_info": false, 00:13:55.515 "zone_management": false, 00:13:55.515 "zone_append": false, 00:13:55.515 "compare": false, 00:13:55.515 "compare_and_write": false, 00:13:55.515 "abort": false, 00:13:55.515 "seek_hole": false, 00:13:55.515 "seek_data": false, 00:13:55.515 "copy": false, 00:13:55.515 "nvme_iov_md": false 00:13:55.515 }, 00:13:55.515 "driver_specific": { 00:13:55.515 
"raid": { 00:13:55.515 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:55.515 "strip_size_kb": 64, 00:13:55.515 "state": "online", 00:13:55.515 "raid_level": "raid5f", 00:13:55.515 "superblock": true, 00:13:55.515 "num_base_bdevs": 3, 00:13:55.515 "num_base_bdevs_discovered": 3, 00:13:55.515 "num_base_bdevs_operational": 3, 00:13:55.515 "base_bdevs_list": [ 00:13:55.515 { 00:13:55.515 "name": "pt1", 00:13:55.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.515 "is_configured": true, 00:13:55.515 "data_offset": 2048, 00:13:55.515 "data_size": 63488 00:13:55.515 }, 00:13:55.515 { 00:13:55.515 "name": "pt2", 00:13:55.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.515 "is_configured": true, 00:13:55.515 "data_offset": 2048, 00:13:55.515 "data_size": 63488 00:13:55.515 }, 00:13:55.515 { 00:13:55.515 "name": "pt3", 00:13:55.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.515 "is_configured": true, 00:13:55.515 "data_offset": 2048, 00:13:55.515 "data_size": 63488 00:13:55.515 } 00:13:55.515 ] 00:13:55.515 } 00:13:55.515 } 00:13:55.515 }' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:55.515 pt2 00:13:55.515 pt3' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.515 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.516 22:58:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.516 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.776 [2024-11-26 22:58:34.652699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5375885d-edf1-4eed-8409-9017633ee69f '!=' 5375885d-edf1-4eed-8409-9017633ee69f ']' 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.776 [2024-11-26 22:58:34.680571] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.776 "name": "raid_bdev1", 
00:13:55.776 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:55.776 "strip_size_kb": 64, 00:13:55.776 "state": "online", 00:13:55.776 "raid_level": "raid5f", 00:13:55.776 "superblock": true, 00:13:55.776 "num_base_bdevs": 3, 00:13:55.776 "num_base_bdevs_discovered": 2, 00:13:55.776 "num_base_bdevs_operational": 2, 00:13:55.776 "base_bdevs_list": [ 00:13:55.776 { 00:13:55.776 "name": null, 00:13:55.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.776 "is_configured": false, 00:13:55.776 "data_offset": 0, 00:13:55.776 "data_size": 63488 00:13:55.776 }, 00:13:55.776 { 00:13:55.776 "name": "pt2", 00:13:55.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.776 "is_configured": true, 00:13:55.776 "data_offset": 2048, 00:13:55.776 "data_size": 63488 00:13:55.776 }, 00:13:55.776 { 00:13:55.776 "name": "pt3", 00:13:55.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.776 "is_configured": true, 00:13:55.776 "data_offset": 2048, 00:13:55.776 "data_size": 63488 00:13:55.776 } 00:13:55.776 ] 00:13:55.776 }' 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.776 22:58:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.036 [2024-11-26 22:58:35.100648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.036 [2024-11-26 22:58:35.100672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.036 [2024-11-26 22:58:35.100734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.036 [2024-11-26 22:58:35.100776] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.036 [2024-11-26 22:58:35.100786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:56.036 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.296 [2024-11-26 22:58:35.168681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:56.296 [2024-11-26 22:58:35.168729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.296 [2024-11-26 22:58:35.168744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:56.296 [2024-11-26 22:58:35.168754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.296 [2024-11-26 22:58:35.170697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.296 [2024-11-26 22:58:35.170737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:56.296 [2024-11-26 22:58:35.170791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:13:56.296 [2024-11-26 22:58:35.170822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.296 pt2 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.296 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.297 22:58:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.297 "name": "raid_bdev1", 00:13:56.297 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:56.297 "strip_size_kb": 64, 00:13:56.297 "state": "configuring", 00:13:56.297 "raid_level": "raid5f", 00:13:56.297 "superblock": true, 00:13:56.297 "num_base_bdevs": 3, 00:13:56.297 "num_base_bdevs_discovered": 1, 00:13:56.297 "num_base_bdevs_operational": 2, 00:13:56.297 "base_bdevs_list": [ 00:13:56.297 { 00:13:56.297 "name": null, 00:13:56.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.297 "is_configured": false, 00:13:56.297 "data_offset": 2048, 00:13:56.297 "data_size": 63488 00:13:56.297 }, 00:13:56.297 { 00:13:56.297 "name": "pt2", 00:13:56.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.297 "is_configured": true, 00:13:56.297 "data_offset": 2048, 00:13:56.297 "data_size": 63488 00:13:56.297 }, 00:13:56.297 { 00:13:56.297 "name": null, 00:13:56.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.297 "is_configured": false, 00:13:56.297 "data_offset": 2048, 00:13:56.297 "data_size": 63488 00:13:56.297 } 00:13:56.297 ] 00:13:56.297 }' 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.297 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.557 22:58:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.557 [2024-11-26 22:58:35.580792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:56.557 [2024-11-26 22:58:35.580843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.557 [2024-11-26 22:58:35.580859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:56.557 [2024-11-26 22:58:35.580869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.557 [2024-11-26 22:58:35.581136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.557 [2024-11-26 22:58:35.581162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:56.557 [2024-11-26 22:58:35.581210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:56.557 [2024-11-26 22:58:35.581229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.557 [2024-11-26 22:58:35.581310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:56.557 [2024-11-26 22:58:35.581320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:56.557 [2024-11-26 22:58:35.581519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:56.557 [2024-11-26 22:58:35.581906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:56.557 [2024-11-26 22:58:35.581924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:56.557 [2024-11-26 22:58:35.582121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.557 pt3 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.557 22:58:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.557 "name": "raid_bdev1", 00:13:56.557 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:56.557 "strip_size_kb": 64, 00:13:56.557 "state": "online", 00:13:56.557 "raid_level": "raid5f", 00:13:56.557 "superblock": true, 
00:13:56.557 "num_base_bdevs": 3, 00:13:56.557 "num_base_bdevs_discovered": 2, 00:13:56.557 "num_base_bdevs_operational": 2, 00:13:56.557 "base_bdevs_list": [ 00:13:56.557 { 00:13:56.557 "name": null, 00:13:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.557 "is_configured": false, 00:13:56.557 "data_offset": 2048, 00:13:56.557 "data_size": 63488 00:13:56.557 }, 00:13:56.557 { 00:13:56.557 "name": "pt2", 00:13:56.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.557 "is_configured": true, 00:13:56.557 "data_offset": 2048, 00:13:56.557 "data_size": 63488 00:13:56.557 }, 00:13:56.557 { 00:13:56.557 "name": "pt3", 00:13:56.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.557 "is_configured": true, 00:13:56.557 "data_offset": 2048, 00:13:56.557 "data_size": 63488 00:13:56.557 } 00:13:56.557 ] 00:13:56.557 }' 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.557 22:58:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 [2024-11-26 22:58:36.044894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.127 [2024-11-26 22:58:36.044922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.127 [2024-11-26 22:58:36.044967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.127 [2024-11-26 22:58:36.045010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.127 [2024-11-26 22:58:36.045018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 [2024-11-26 22:58:36.116917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:13:57.127 [2024-11-26 22:58:36.116964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.127 [2024-11-26 22:58:36.116980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:57.127 [2024-11-26 22:58:36.116988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.127 [2024-11-26 22:58:36.118964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.127 [2024-11-26 22:58:36.118999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.127 [2024-11-26 22:58:36.119050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:57.127 [2024-11-26 22:58:36.119074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.127 [2024-11-26 22:58:36.119170] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:57.127 [2024-11-26 22:58:36.119180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.127 [2024-11-26 22:58:36.119195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:13:57.127 [2024-11-26 22:58:36.119225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.127 pt1 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.127 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.127 "name": "raid_bdev1", 00:13:57.127 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:57.127 "strip_size_kb": 64, 00:13:57.127 "state": "configuring", 00:13:57.127 "raid_level": "raid5f", 00:13:57.127 "superblock": true, 00:13:57.127 "num_base_bdevs": 3, 00:13:57.127 "num_base_bdevs_discovered": 1, 00:13:57.127 "num_base_bdevs_operational": 2, 00:13:57.127 "base_bdevs_list": [ 00:13:57.127 { 00:13:57.127 "name": null, 00:13:57.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.127 "is_configured": false, 
00:13:57.127 "data_offset": 2048, 00:13:57.127 "data_size": 63488 00:13:57.127 }, 00:13:57.127 { 00:13:57.127 "name": "pt2", 00:13:57.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.127 "is_configured": true, 00:13:57.127 "data_offset": 2048, 00:13:57.127 "data_size": 63488 00:13:57.127 }, 00:13:57.128 { 00:13:57.128 "name": null, 00:13:57.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.128 "is_configured": false, 00:13:57.128 "data_offset": 2048, 00:13:57.128 "data_size": 63488 00:13:57.128 } 00:13:57.128 ] 00:13:57.128 }' 00:13:57.128 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.128 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 [2024-11-26 22:58:36.637071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:57.697 [2024-11-26 22:58:36.637117] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.697 [2024-11-26 22:58:36.637132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:57.697 [2024-11-26 22:58:36.637139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.697 [2024-11-26 22:58:36.637469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.697 [2024-11-26 22:58:36.637495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:57.697 [2024-11-26 22:58:36.637547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:57.697 [2024-11-26 22:58:36.637563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:57.697 [2024-11-26 22:58:36.637632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:57.697 [2024-11-26 22:58:36.637641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.697 [2024-11-26 22:58:36.637853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:57.697 [2024-11-26 22:58:36.638233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:57.697 [2024-11-26 22:58:36.638269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:57.697 [2024-11-26 22:58:36.638403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.697 pt3 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.697 22:58:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.697 "name": "raid_bdev1", 00:13:57.697 "uuid": "5375885d-edf1-4eed-8409-9017633ee69f", 00:13:57.697 "strip_size_kb": 64, 00:13:57.697 "state": "online", 00:13:57.697 "raid_level": "raid5f", 00:13:57.697 "superblock": true, 00:13:57.697 "num_base_bdevs": 3, 00:13:57.697 "num_base_bdevs_discovered": 2, 00:13:57.697 "num_base_bdevs_operational": 2, 00:13:57.697 "base_bdevs_list": [ 00:13:57.697 { 00:13:57.697 "name": null, 00:13:57.697 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:57.697 "is_configured": false, 00:13:57.697 "data_offset": 2048, 00:13:57.697 "data_size": 63488 00:13:57.697 }, 00:13:57.697 { 00:13:57.697 "name": "pt2", 00:13:57.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.697 "is_configured": true, 00:13:57.697 "data_offset": 2048, 00:13:57.697 "data_size": 63488 00:13:57.697 }, 00:13:57.697 { 00:13:57.697 "name": "pt3", 00:13:57.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.697 "is_configured": true, 00:13:57.697 "data_offset": 2048, 00:13:57.697 "data_size": 63488 00:13:57.697 } 00:13:57.697 ] 00:13:57.697 }' 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.697 22:58:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.957 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.957 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:57.957 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.957 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.217 [2024-11-26 22:58:37.101343] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5375885d-edf1-4eed-8409-9017633ee69f '!=' 5375885d-edf1-4eed-8409-9017633ee69f ']' 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93306 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93306 ']' 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93306 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93306 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.217 killing process with pid 93306 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93306' 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93306 00:13:58.217 [2024-11-26 22:58:37.180961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.217 [2024-11-26 22:58:37.181035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.217 [2024-11-26 22:58:37.181084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.217 [2024-11-26 22:58:37.181098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:58.217 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93306 00:13:58.217 [2024-11-26 22:58:37.214020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.478 22:58:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:58.478 00:13:58.478 real 0m6.432s 00:13:58.478 user 0m10.780s 00:13:58.478 sys 0m1.380s 00:13:58.478 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.478 22:58:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.478 ************************************ 00:13:58.478 END TEST raid5f_superblock_test 00:13:58.478 ************************************ 00:13:58.478 22:58:37 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:58.478 22:58:37 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:58.478 22:58:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:58.478 22:58:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.478 22:58:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.478 ************************************ 00:13:58.478 START TEST raid5f_rebuild_test 00:13:58.478 ************************************ 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:58.478 22:58:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # 
'[' raid5f '!=' raid1 ']' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93729 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93729 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 93729 ']' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.478 22:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.739 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.739 Zero copy mechanism will not be used. 00:13:58.739 [2024-11-26 22:58:37.635338] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:13:58.739 [2024-11-26 22:58:37.635458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93729 ] 00:13:58.739 [2024-11-26 22:58:37.776334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:58.739 [2024-11-26 22:58:37.815773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.739 [2024-11-26 22:58:37.843714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.998 [2024-11-26 22:58:37.888666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.998 [2024-11-26 22:58:37.888711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 BaseBdev1_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 [2024-11-26 22:58:38.469735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.568 [2024-11-26 22:58:38.469821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.568 [2024-11-26 22:58:38.469853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.568 [2024-11-26 22:58:38.469874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.568 [2024-11-26 22:58:38.471952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.568 [2024-11-26 22:58:38.471993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.568 BaseBdev1 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 BaseBdev2_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 [2024-11-26 22:58:38.498232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:59.568 [2024-11-26 22:58:38.498294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.568 [2024-11-26 22:58:38.498312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.568 [2024-11-26 22:58:38.498322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.568 [2024-11-26 22:58:38.500279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.568 [2024-11-26 22:58:38.500314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.568 BaseBdev2 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 BaseBdev3_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 [2024-11-26 22:58:38.526702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:59.568 [2024-11-26 22:58:38.526769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.568 [2024-11-26 22:58:38.526789] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.568 [2024-11-26 22:58:38.526800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.568 [2024-11-26 22:58:38.528728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.568 [2024-11-26 22:58:38.528765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.568 BaseBdev3 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 spare_malloc 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 spare_delay 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 [2024-11-26 22:58:38.584337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.568 [2024-11-26 22:58:38.584398] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.568 [2024-11-26 22:58:38.584417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:59.568 [2024-11-26 22:58:38.584430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.568 [2024-11-26 22:58:38.586981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.568 [2024-11-26 22:58:38.587028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.568 spare 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.568 [2024-11-26 22:58:38.596370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.568 [2024-11-26 22:58:38.598028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.568 [2024-11-26 22:58:38.598090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.568 [2024-11-26 22:58:38.598160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:59.568 [2024-11-26 22:58:38.598169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:59.568 [2024-11-26 22:58:38.598440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:59.568 [2024-11-26 22:58:38.598874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:59.568 [2024-11-26 22:58:38.598897] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:59.568 [2024-11-26 22:58:38.599008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.568 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.569 22:58:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.569 "name": "raid_bdev1", 00:13:59.569 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:13:59.569 "strip_size_kb": 64, 00:13:59.569 "state": "online", 00:13:59.569 "raid_level": "raid5f", 00:13:59.569 "superblock": false, 00:13:59.569 "num_base_bdevs": 3, 00:13:59.569 "num_base_bdevs_discovered": 3, 00:13:59.569 "num_base_bdevs_operational": 3, 00:13:59.569 "base_bdevs_list": [ 00:13:59.569 { 00:13:59.569 "name": "BaseBdev1", 00:13:59.569 "uuid": "3a25dc5b-9d09-5ba9-b7ed-2bcb29318fbb", 00:13:59.569 "is_configured": true, 00:13:59.569 "data_offset": 0, 00:13:59.569 "data_size": 65536 00:13:59.569 }, 00:13:59.569 { 00:13:59.569 "name": "BaseBdev2", 00:13:59.569 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:13:59.569 "is_configured": true, 00:13:59.569 "data_offset": 0, 00:13:59.569 "data_size": 65536 00:13:59.569 }, 00:13:59.569 { 00:13:59.569 "name": "BaseBdev3", 00:13:59.569 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:13:59.569 "is_configured": true, 00:13:59.569 "data_offset": 0, 00:13:59.569 "data_size": 65536 00:13:59.569 } 00:13:59.569 ] 00:13:59.569 }' 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.569 22:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.139 [2024-11-26 22:58:39.032674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:00.139 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:00.399 [2024-11-26 22:58:39.296602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:00.399 /dev/nbd0 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.399 1+0 records in 00:14:00.399 1+0 records out 00:14:00.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453904 s, 9.0 MB/s 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:00.399 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:00.660 512+0 records in 00:14:00.660 512+0 records out 00:14:00.660 67108864 bytes (67 MB, 64 MiB) copied, 0.299484 s, 224 MB/s 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.660 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.920 
[2024-11-26 22:58:39.890458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.920 [2024-11-26 22:58:39.902537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.920 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.920 "name": "raid_bdev1", 00:14:00.920 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:00.920 "strip_size_kb": 64, 00:14:00.920 "state": "online", 00:14:00.920 "raid_level": "raid5f", 00:14:00.920 "superblock": false, 00:14:00.920 "num_base_bdevs": 3, 00:14:00.920 "num_base_bdevs_discovered": 2, 00:14:00.920 "num_base_bdevs_operational": 2, 00:14:00.920 "base_bdevs_list": [ 00:14:00.920 { 00:14:00.920 "name": null, 00:14:00.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.920 "is_configured": false, 00:14:00.920 "data_offset": 0, 00:14:00.920 "data_size": 65536 00:14:00.920 }, 00:14:00.921 { 00:14:00.921 "name": "BaseBdev2", 00:14:00.921 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:00.921 "is_configured": true, 00:14:00.921 "data_offset": 0, 00:14:00.921 "data_size": 65536 00:14:00.921 }, 00:14:00.921 { 00:14:00.921 "name": "BaseBdev3", 00:14:00.921 "uuid": 
"e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:00.921 "is_configured": true, 00:14:00.921 "data_offset": 0, 00:14:00.921 "data_size": 65536 00:14:00.921 } 00:14:00.921 ] 00:14:00.921 }' 00:14:00.921 22:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.921 22:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.491 22:58:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.491 22:58:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.491 22:58:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.491 [2024-11-26 22:58:40.346676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.491 [2024-11-26 22:58:40.351323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:14:01.491 22:58:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.491 22:58:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:01.491 [2024-11-26 22:58:40.353375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.430 22:58:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.430 "name": "raid_bdev1", 00:14:02.430 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:02.430 "strip_size_kb": 64, 00:14:02.430 "state": "online", 00:14:02.430 "raid_level": "raid5f", 00:14:02.430 "superblock": false, 00:14:02.430 "num_base_bdevs": 3, 00:14:02.430 "num_base_bdevs_discovered": 3, 00:14:02.430 "num_base_bdevs_operational": 3, 00:14:02.430 "process": { 00:14:02.430 "type": "rebuild", 00:14:02.430 "target": "spare", 00:14:02.430 "progress": { 00:14:02.430 "blocks": 20480, 00:14:02.430 "percent": 15 00:14:02.430 } 00:14:02.430 }, 00:14:02.430 "base_bdevs_list": [ 00:14:02.430 { 00:14:02.430 "name": "spare", 00:14:02.430 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:02.430 "is_configured": true, 00:14:02.430 "data_offset": 0, 00:14:02.430 "data_size": 65536 00:14:02.430 }, 00:14:02.430 { 00:14:02.430 "name": "BaseBdev2", 00:14:02.430 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:02.430 "is_configured": true, 00:14:02.430 "data_offset": 0, 00:14:02.430 "data_size": 65536 00:14:02.430 }, 00:14:02.430 { 00:14:02.430 "name": "BaseBdev3", 00:14:02.430 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:02.430 "is_configured": true, 00:14:02.430 "data_offset": 0, 00:14:02.430 "data_size": 65536 00:14:02.430 } 00:14:02.430 ] 00:14:02.430 }' 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.430 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.430 [2024-11-26 22:58:41.519661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.688 [2024-11-26 22:58:41.562382] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.688 [2024-11-26 22:58:41.562433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.688 [2024-11-26 22:58:41.562450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.688 [2024-11-26 22:58:41.562460] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.688 "name": "raid_bdev1", 00:14:02.688 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:02.688 "strip_size_kb": 64, 00:14:02.688 "state": "online", 00:14:02.688 "raid_level": "raid5f", 00:14:02.688 "superblock": false, 00:14:02.688 "num_base_bdevs": 3, 00:14:02.688 "num_base_bdevs_discovered": 2, 00:14:02.688 "num_base_bdevs_operational": 2, 00:14:02.688 "base_bdevs_list": [ 00:14:02.688 { 00:14:02.688 "name": null, 00:14:02.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.688 "is_configured": false, 00:14:02.688 "data_offset": 0, 00:14:02.688 "data_size": 65536 00:14:02.688 }, 00:14:02.688 { 00:14:02.688 "name": "BaseBdev2", 00:14:02.688 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:02.688 "is_configured": true, 00:14:02.688 "data_offset": 0, 00:14:02.688 "data_size": 65536 00:14:02.688 }, 00:14:02.688 { 00:14:02.688 "name": "BaseBdev3", 00:14:02.688 "uuid": 
"e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:02.688 "is_configured": true, 00:14:02.688 "data_offset": 0, 00:14:02.688 "data_size": 65536 00:14:02.688 } 00:14:02.688 ] 00:14:02.688 }' 00:14:02.688 22:58:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.689 22:58:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.948 "name": "raid_bdev1", 00:14:02.948 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:02.948 "strip_size_kb": 64, 00:14:02.948 "state": "online", 00:14:02.948 "raid_level": "raid5f", 00:14:02.948 "superblock": false, 00:14:02.948 "num_base_bdevs": 3, 00:14:02.948 "num_base_bdevs_discovered": 2, 00:14:02.948 "num_base_bdevs_operational": 2, 00:14:02.948 "base_bdevs_list": [ 00:14:02.948 { 00:14:02.948 
"name": null, 00:14:02.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.948 "is_configured": false, 00:14:02.948 "data_offset": 0, 00:14:02.948 "data_size": 65536 00:14:02.948 }, 00:14:02.948 { 00:14:02.948 "name": "BaseBdev2", 00:14:02.948 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:02.948 "is_configured": true, 00:14:02.948 "data_offset": 0, 00:14:02.948 "data_size": 65536 00:14:02.948 }, 00:14:02.948 { 00:14:02.948 "name": "BaseBdev3", 00:14:02.948 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:02.948 "is_configured": true, 00:14:02.948 "data_offset": 0, 00:14:02.948 "data_size": 65536 00:14:02.948 } 00:14:02.948 ] 00:14:02.948 }' 00:14:02.948 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.208 [2024-11-26 22:58:42.132510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.208 [2024-11-26 22:58:42.136167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:14:03.208 [2024-11-26 22:58:42.138178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.208 22:58:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.169 "name": "raid_bdev1", 00:14:04.169 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:04.169 "strip_size_kb": 64, 00:14:04.169 "state": "online", 00:14:04.169 "raid_level": "raid5f", 00:14:04.169 "superblock": false, 00:14:04.169 "num_base_bdevs": 3, 00:14:04.169 "num_base_bdevs_discovered": 3, 00:14:04.169 "num_base_bdevs_operational": 3, 00:14:04.169 "process": { 00:14:04.169 "type": "rebuild", 00:14:04.169 "target": "spare", 00:14:04.169 "progress": { 00:14:04.169 "blocks": 20480, 00:14:04.169 "percent": 15 00:14:04.169 } 00:14:04.169 }, 00:14:04.169 "base_bdevs_list": [ 00:14:04.169 { 00:14:04.169 "name": "spare", 00:14:04.169 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:04.169 "is_configured": true, 00:14:04.169 
"data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 }, 00:14:04.169 { 00:14:04.169 "name": "BaseBdev2", 00:14:04.169 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:04.169 "is_configured": true, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 }, 00:14:04.169 { 00:14:04.169 "name": "BaseBdev3", 00:14:04.169 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:04.169 "is_configured": true, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 } 00:14:04.169 ] 00:14:04.169 }' 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.169 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.449 
22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.449 "name": "raid_bdev1", 00:14:04.449 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:04.449 "strip_size_kb": 64, 00:14:04.449 "state": "online", 00:14:04.449 "raid_level": "raid5f", 00:14:04.449 "superblock": false, 00:14:04.449 "num_base_bdevs": 3, 00:14:04.449 "num_base_bdevs_discovered": 3, 00:14:04.449 "num_base_bdevs_operational": 3, 00:14:04.449 "process": { 00:14:04.449 "type": "rebuild", 00:14:04.449 "target": "spare", 00:14:04.449 "progress": { 00:14:04.449 "blocks": 22528, 00:14:04.449 "percent": 17 00:14:04.449 } 00:14:04.449 }, 00:14:04.449 "base_bdevs_list": [ 00:14:04.449 { 00:14:04.449 "name": "spare", 00:14:04.449 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:04.449 "is_configured": true, 00:14:04.449 "data_offset": 0, 00:14:04.449 "data_size": 65536 00:14:04.449 }, 00:14:04.449 { 00:14:04.449 "name": "BaseBdev2", 00:14:04.449 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:04.449 "is_configured": true, 00:14:04.449 "data_offset": 0, 00:14:04.449 "data_size": 65536 00:14:04.449 }, 00:14:04.449 { 00:14:04.449 "name": "BaseBdev3", 00:14:04.449 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:04.449 "is_configured": true, 00:14:04.449 "data_offset": 0, 00:14:04.449 "data_size": 65536 00:14:04.449 
} 00:14:04.449 ] 00:14:04.449 }' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.449 22:58:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.404 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.404 "name": "raid_bdev1", 00:14:05.404 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:05.404 
"strip_size_kb": 64, 00:14:05.404 "state": "online", 00:14:05.404 "raid_level": "raid5f", 00:14:05.404 "superblock": false, 00:14:05.404 "num_base_bdevs": 3, 00:14:05.404 "num_base_bdevs_discovered": 3, 00:14:05.404 "num_base_bdevs_operational": 3, 00:14:05.404 "process": { 00:14:05.404 "type": "rebuild", 00:14:05.404 "target": "spare", 00:14:05.404 "progress": { 00:14:05.404 "blocks": 47104, 00:14:05.404 "percent": 35 00:14:05.404 } 00:14:05.404 }, 00:14:05.404 "base_bdevs_list": [ 00:14:05.404 { 00:14:05.404 "name": "spare", 00:14:05.404 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:05.404 "is_configured": true, 00:14:05.404 "data_offset": 0, 00:14:05.404 "data_size": 65536 00:14:05.404 }, 00:14:05.404 { 00:14:05.404 "name": "BaseBdev2", 00:14:05.404 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:05.404 "is_configured": true, 00:14:05.404 "data_offset": 0, 00:14:05.404 "data_size": 65536 00:14:05.404 }, 00:14:05.404 { 00:14:05.404 "name": "BaseBdev3", 00:14:05.405 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:05.405 "is_configured": true, 00:14:05.405 "data_offset": 0, 00:14:05.405 "data_size": 65536 00:14:05.405 } 00:14:05.405 ] 00:14:05.405 }' 00:14:05.405 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.665 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.665 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.665 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.665 22:58:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.604 22:58:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 22:58:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.605 "name": "raid_bdev1", 00:14:06.605 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:06.605 "strip_size_kb": 64, 00:14:06.605 "state": "online", 00:14:06.605 "raid_level": "raid5f", 00:14:06.605 "superblock": false, 00:14:06.605 "num_base_bdevs": 3, 00:14:06.605 "num_base_bdevs_discovered": 3, 00:14:06.605 "num_base_bdevs_operational": 3, 00:14:06.605 "process": { 00:14:06.605 "type": "rebuild", 00:14:06.605 "target": "spare", 00:14:06.605 "progress": { 00:14:06.605 "blocks": 69632, 00:14:06.605 "percent": 53 00:14:06.605 } 00:14:06.605 }, 00:14:06.605 "base_bdevs_list": [ 00:14:06.605 { 00:14:06.605 "name": "spare", 00:14:06.605 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:06.605 "is_configured": true, 00:14:06.605 "data_offset": 0, 00:14:06.605 "data_size": 65536 00:14:06.605 }, 00:14:06.605 { 00:14:06.605 "name": "BaseBdev2", 00:14:06.605 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:06.605 
"is_configured": true, 00:14:06.605 "data_offset": 0, 00:14:06.605 "data_size": 65536 00:14:06.605 }, 00:14:06.605 { 00:14:06.605 "name": "BaseBdev3", 00:14:06.605 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:06.605 "is_configured": true, 00:14:06.605 "data_offset": 0, 00:14:06.605 "data_size": 65536 00:14:06.605 } 00:14:06.605 ] 00:14:06.605 }' 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.605 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.864 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.864 22:58:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.804 "name": "raid_bdev1", 00:14:07.804 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:07.804 "strip_size_kb": 64, 00:14:07.804 "state": "online", 00:14:07.804 "raid_level": "raid5f", 00:14:07.804 "superblock": false, 00:14:07.804 "num_base_bdevs": 3, 00:14:07.804 "num_base_bdevs_discovered": 3, 00:14:07.804 "num_base_bdevs_operational": 3, 00:14:07.804 "process": { 00:14:07.804 "type": "rebuild", 00:14:07.804 "target": "spare", 00:14:07.804 "progress": { 00:14:07.804 "blocks": 94208, 00:14:07.804 "percent": 71 00:14:07.804 } 00:14:07.804 }, 00:14:07.804 "base_bdevs_list": [ 00:14:07.804 { 00:14:07.804 "name": "spare", 00:14:07.804 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:07.804 "is_configured": true, 00:14:07.804 "data_offset": 0, 00:14:07.804 "data_size": 65536 00:14:07.804 }, 00:14:07.804 { 00:14:07.804 "name": "BaseBdev2", 00:14:07.804 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:07.804 "is_configured": true, 00:14:07.804 "data_offset": 0, 00:14:07.804 "data_size": 65536 00:14:07.804 }, 00:14:07.804 { 00:14:07.804 "name": "BaseBdev3", 00:14:07.804 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:07.804 "is_configured": true, 00:14:07.804 "data_offset": 0, 00:14:07.804 "data_size": 65536 00:14:07.804 } 00:14:07.804 ] 00:14:07.804 }' 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.804 22:58:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.804 22:58:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.185 "name": "raid_bdev1", 00:14:09.185 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:09.185 "strip_size_kb": 64, 00:14:09.185 "state": "online", 00:14:09.185 "raid_level": "raid5f", 00:14:09.185 "superblock": false, 00:14:09.185 "num_base_bdevs": 3, 00:14:09.185 "num_base_bdevs_discovered": 3, 00:14:09.185 "num_base_bdevs_operational": 3, 00:14:09.185 "process": { 00:14:09.185 "type": "rebuild", 00:14:09.185 "target": "spare", 00:14:09.185 "progress": { 00:14:09.185 "blocks": 116736, 00:14:09.185 "percent": 89 00:14:09.185 } 00:14:09.185 }, 00:14:09.185 "base_bdevs_list": [ 00:14:09.185 { 
00:14:09.185 "name": "spare", 00:14:09.185 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:09.185 "is_configured": true, 00:14:09.185 "data_offset": 0, 00:14:09.185 "data_size": 65536 00:14:09.185 }, 00:14:09.185 { 00:14:09.185 "name": "BaseBdev2", 00:14:09.185 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:09.185 "is_configured": true, 00:14:09.185 "data_offset": 0, 00:14:09.185 "data_size": 65536 00:14:09.185 }, 00:14:09.185 { 00:14:09.185 "name": "BaseBdev3", 00:14:09.185 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:09.185 "is_configured": true, 00:14:09.185 "data_offset": 0, 00:14:09.185 "data_size": 65536 00:14:09.185 } 00:14:09.185 ] 00:14:09.185 }' 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.185 22:58:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.185 22:58:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.185 22:58:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.756 [2024-11-26 22:58:48.583023] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.756 [2024-11-26 22:58:48.583100] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.756 [2024-11-26 22:58:48.583139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.016 22:58:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.016 "name": "raid_bdev1", 00:14:10.016 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:10.016 "strip_size_kb": 64, 00:14:10.016 "state": "online", 00:14:10.016 "raid_level": "raid5f", 00:14:10.016 "superblock": false, 00:14:10.016 "num_base_bdevs": 3, 00:14:10.016 "num_base_bdevs_discovered": 3, 00:14:10.016 "num_base_bdevs_operational": 3, 00:14:10.016 "base_bdevs_list": [ 00:14:10.016 { 00:14:10.016 "name": "spare", 00:14:10.016 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:10.016 "is_configured": true, 00:14:10.016 "data_offset": 0, 00:14:10.016 "data_size": 65536 00:14:10.016 }, 00:14:10.016 { 00:14:10.016 "name": "BaseBdev2", 00:14:10.016 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:10.016 "is_configured": true, 00:14:10.016 "data_offset": 0, 00:14:10.016 "data_size": 65536 00:14:10.016 }, 00:14:10.016 { 00:14:10.016 "name": "BaseBdev3", 00:14:10.016 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:10.016 "is_configured": true, 00:14:10.016 "data_offset": 0, 00:14:10.016 "data_size": 65536 00:14:10.016 } 
00:14:10.016 ] 00:14:10.016 }' 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.016 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.285 "name": "raid_bdev1", 00:14:10.285 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:10.285 "strip_size_kb": 64, 00:14:10.285 "state": "online", 00:14:10.285 "raid_level": "raid5f", 00:14:10.285 "superblock": false, 
00:14:10.285 "num_base_bdevs": 3, 00:14:10.285 "num_base_bdevs_discovered": 3, 00:14:10.285 "num_base_bdevs_operational": 3, 00:14:10.285 "base_bdevs_list": [ 00:14:10.285 { 00:14:10.285 "name": "spare", 00:14:10.285 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:10.285 "is_configured": true, 00:14:10.285 "data_offset": 0, 00:14:10.285 "data_size": 65536 00:14:10.285 }, 00:14:10.285 { 00:14:10.285 "name": "BaseBdev2", 00:14:10.285 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:10.285 "is_configured": true, 00:14:10.285 "data_offset": 0, 00:14:10.285 "data_size": 65536 00:14:10.285 }, 00:14:10.285 { 00:14:10.285 "name": "BaseBdev3", 00:14:10.285 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 00:14:10.285 "is_configured": true, 00:14:10.285 "data_offset": 0, 00:14:10.285 "data_size": 65536 00:14:10.285 } 00:14:10.285 ] 00:14:10.285 }' 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.285 
22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.285 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.286 "name": "raid_bdev1", 00:14:10.286 "uuid": "8ab0613b-d726-49a0-9f48-f432b9ba4193", 00:14:10.286 "strip_size_kb": 64, 00:14:10.286 "state": "online", 00:14:10.286 "raid_level": "raid5f", 00:14:10.286 "superblock": false, 00:14:10.286 "num_base_bdevs": 3, 00:14:10.286 "num_base_bdevs_discovered": 3, 00:14:10.286 "num_base_bdevs_operational": 3, 00:14:10.286 "base_bdevs_list": [ 00:14:10.286 { 00:14:10.286 "name": "spare", 00:14:10.286 "uuid": "dafe1032-5bf5-5938-90ed-5644971ee1b9", 00:14:10.286 "is_configured": true, 00:14:10.286 "data_offset": 0, 00:14:10.286 "data_size": 65536 00:14:10.286 }, 00:14:10.286 { 00:14:10.286 "name": "BaseBdev2", 00:14:10.286 "uuid": "b16a0ccc-a0c9-51b0-b64e-a776c6a67c73", 00:14:10.286 "is_configured": true, 00:14:10.286 "data_offset": 0, 00:14:10.286 "data_size": 65536 00:14:10.286 }, 00:14:10.286 { 00:14:10.286 "name": "BaseBdev3", 00:14:10.286 "uuid": "e1904659-aa2d-5530-9147-9e9f5aecb2e4", 
00:14:10.286 "is_configured": true, 00:14:10.286 "data_offset": 0, 00:14:10.286 "data_size": 65536 00:14:10.286 } 00:14:10.286 ] 00:14:10.286 }' 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.286 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.856 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 [2024-11-26 22:58:49.696821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.857 [2024-11-26 22:58:49.696855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.857 [2024-11-26 22:58:49.696929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.857 [2024-11-26 22:58:49.697000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.857 [2024-11-26 22:58:49.697020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:10.857 /dev/nbd0 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.857 1+0 records in 00:14:10.857 1+0 records out 00:14:10.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392427 s, 10.4 MB/s 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.857 22:58:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:11.117 /dev/nbd1 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.117 22:58:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.117 1+0 records in 00:14:11.117 1+0 records out 00:14:11.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004017 s, 10.2 MB/s 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:11.117 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.376 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.376 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:11.376 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.376 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.376 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # 
cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.377 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93729 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 93729 ']' 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 93729 00:14:11.636 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93729 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93729' 00:14:11.897 killing process with pid 93729 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 93729 00:14:11.897 
Received shutdown signal, test time was about 60.000000 seconds 00:14:11.897 00:14:11.897 Latency(us) 00:14:11.897 [2024-11-26T22:58:51.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.897 [2024-11-26T22:58:51.025Z] =================================================================================================================== 00:14:11.897 [2024-11-26T22:58:51.025Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.897 [2024-11-26 22:58:50.802649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.897 22:58:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 93729 00:14:11.897 [2024-11-26 22:58:50.842622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.158 22:58:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.158 00:14:12.158 real 0m13.521s 00:14:12.159 user 0m16.794s 00:14:12.159 sys 0m2.054s 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.159 ************************************ 00:14:12.159 END TEST raid5f_rebuild_test 00:14:12.159 ************************************ 00:14:12.159 22:58:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:12.159 22:58:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.159 22:58:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.159 22:58:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.159 ************************************ 00:14:12.159 START TEST raid5f_rebuild_test_sb 00:14:12.159 ************************************ 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:12.159 
22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94152 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94152 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94152 ']' 00:14:12.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.159 22:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.159 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.159 Zero copy mechanism will not be used. 00:14:12.159 [2024-11-26 22:58:51.238210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:12.159 [2024-11-26 22:58:51.238377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94152 ] 00:14:12.420 [2024-11-26 22:58:51.378776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:12.420 [2024-11-26 22:58:51.413329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.420 [2024-11-26 22:58:51.439742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.420 [2024-11-26 22:58:51.482612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.420 [2024-11-26 22:58:51.482729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 BaseBdev1_malloc 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 [2024-11-26 22:58:52.071524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.990 [2024-11-26 22:58:52.071675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.990 [2024-11-26 22:58:52.071730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.990 
[2024-11-26 22:58:52.071777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.990 [2024-11-26 22:58:52.073898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.990 [2024-11-26 22:58:52.073971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.990 BaseBdev1 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 BaseBdev2_malloc 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 [2024-11-26 22:58:52.099924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.990 [2024-11-26 22:58:52.099979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.990 [2024-11-26 22:58:52.099995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.990 [2024-11-26 22:58:52.100005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.990 [2024-11-26 22:58:52.101925] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.990 [2024-11-26 22:58:52.101963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.990 BaseBdev2 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 BaseBdev3_malloc 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 [2024-11-26 22:58:52.128778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:13.250 [2024-11-26 22:58:52.128832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.250 [2024-11-26 22:58:52.128850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:13.250 [2024-11-26 22:58:52.128860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.250 [2024-11-26 22:58:52.130809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.250 [2024-11-26 22:58:52.130850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:13.250 BaseBdev3 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 spare_malloc 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 spare_delay 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 [2024-11-26 22:58:52.187721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.250 [2024-11-26 22:58:52.187784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.250 [2024-11-26 22:58:52.187804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:13.250 [2024-11-26 22:58:52.187816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.250 [2024-11-26 22:58:52.190208] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.250 [2024-11-26 22:58:52.190276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.250 spare 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 [2024-11-26 22:58:52.199763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.250 [2024-11-26 22:58:52.201495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.251 [2024-11-26 22:58:52.201637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.251 [2024-11-26 22:58:52.201786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:13.251 [2024-11-26 22:58:52.201798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:13.251 [2024-11-26 22:58:52.202027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:13.251 [2024-11-26 22:58:52.202416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:13.251 [2024-11-26 22:58:52.202436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:13.251 [2024-11-26 22:58:52.202543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.251 22:58:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.251 "name": "raid_bdev1", 00:14:13.251 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:13.251 "strip_size_kb": 64, 00:14:13.251 "state": "online", 00:14:13.251 "raid_level": "raid5f", 00:14:13.251 "superblock": true, 
00:14:13.251 "num_base_bdevs": 3, 00:14:13.251 "num_base_bdevs_discovered": 3, 00:14:13.251 "num_base_bdevs_operational": 3, 00:14:13.251 "base_bdevs_list": [ 00:14:13.251 { 00:14:13.251 "name": "BaseBdev1", 00:14:13.251 "uuid": "d86f6498-1942-54bb-98b5-cc9837274915", 00:14:13.251 "is_configured": true, 00:14:13.251 "data_offset": 2048, 00:14:13.251 "data_size": 63488 00:14:13.251 }, 00:14:13.251 { 00:14:13.251 "name": "BaseBdev2", 00:14:13.251 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:13.251 "is_configured": true, 00:14:13.251 "data_offset": 2048, 00:14:13.251 "data_size": 63488 00:14:13.251 }, 00:14:13.251 { 00:14:13.251 "name": "BaseBdev3", 00:14:13.251 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:13.251 "is_configured": true, 00:14:13.251 "data_offset": 2048, 00:14:13.251 "data_size": 63488 00:14:13.251 } 00:14:13.251 ] 00:14:13.251 }' 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.251 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.822 [2024-11-26 22:58:52.676275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.822 22:58:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.822 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:14:13.822 [2024-11-26 22:58:52.932167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:14.083 /dev/nbd0 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.083 22:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.083 1+0 records in 00:14:14.083 1+0 records out 00:14:14.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431422 s, 9.5 MB/s 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:14.083 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:14.344 496+0 records in 00:14:14.344 496+0 records out 00:14:14.344 65011712 bytes (65 MB, 62 MiB) copied, 0.298922 s, 217 MB/s 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.344 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.605 [2024-11-26 22:58:53.520311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.605 [2024-11-26 22:58:53.540405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.605 22:58:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.605 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.605 "name": "raid_bdev1", 00:14:14.605 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:14.605 "strip_size_kb": 64, 00:14:14.605 "state": "online", 00:14:14.605 "raid_level": "raid5f", 00:14:14.605 "superblock": true, 00:14:14.605 "num_base_bdevs": 3, 00:14:14.605 "num_base_bdevs_discovered": 2, 00:14:14.605 "num_base_bdevs_operational": 2, 00:14:14.605 "base_bdevs_list": [ 00:14:14.605 { 00:14:14.605 "name": null, 00:14:14.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.605 "is_configured": false, 00:14:14.605 "data_offset": 0, 00:14:14.605 "data_size": 63488 00:14:14.605 }, 00:14:14.605 { 00:14:14.605 "name": "BaseBdev2", 00:14:14.605 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:14.605 "is_configured": true, 00:14:14.605 "data_offset": 2048, 00:14:14.605 "data_size": 63488 00:14:14.605 }, 00:14:14.605 { 00:14:14.606 "name": "BaseBdev3", 00:14:14.606 "uuid": 
"c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:14.606 "is_configured": true, 00:14:14.606 "data_offset": 2048, 00:14:14.606 "data_size": 63488 00:14:14.606 } 00:14:14.606 ] 00:14:14.606 }' 00:14:14.606 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.606 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.177 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.177 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.177 22:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.177 [2024-11-26 22:58:54.004525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.177 [2024-11-26 22:58:54.008962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:14:15.177 22:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.178 22:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:15.178 [2024-11-26 22:58:54.010981] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.118 "name": "raid_bdev1", 00:14:16.118 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:16.118 "strip_size_kb": 64, 00:14:16.118 "state": "online", 00:14:16.118 "raid_level": "raid5f", 00:14:16.118 "superblock": true, 00:14:16.118 "num_base_bdevs": 3, 00:14:16.118 "num_base_bdevs_discovered": 3, 00:14:16.118 "num_base_bdevs_operational": 3, 00:14:16.118 "process": { 00:14:16.118 "type": "rebuild", 00:14:16.118 "target": "spare", 00:14:16.118 "progress": { 00:14:16.118 "blocks": 20480, 00:14:16.118 "percent": 16 00:14:16.118 } 00:14:16.118 }, 00:14:16.118 "base_bdevs_list": [ 00:14:16.118 { 00:14:16.118 "name": "spare", 00:14:16.118 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:16.118 "is_configured": true, 00:14:16.118 "data_offset": 2048, 00:14:16.118 "data_size": 63488 00:14:16.118 }, 00:14:16.118 { 00:14:16.118 "name": "BaseBdev2", 00:14:16.118 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:16.118 "is_configured": true, 00:14:16.118 "data_offset": 2048, 00:14:16.118 "data_size": 63488 00:14:16.118 }, 00:14:16.118 { 00:14:16.118 "name": "BaseBdev3", 00:14:16.118 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:16.118 "is_configured": true, 00:14:16.118 "data_offset": 2048, 00:14:16.118 "data_size": 63488 00:14:16.118 } 00:14:16.118 ] 00:14:16.118 }' 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.118 22:58:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 [2024-11-26 22:58:55.157998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.118 [2024-11-26 22:58:55.219989] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.118 [2024-11-26 22:58:55.220041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.118 [2024-11-26 22:58:55.220058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.118 [2024-11-26 22:58:55.220065] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.118 22:58:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.118 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.378 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.378 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.378 "name": "raid_bdev1", 00:14:16.378 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:16.378 "strip_size_kb": 64, 00:14:16.378 "state": "online", 00:14:16.378 "raid_level": "raid5f", 00:14:16.378 "superblock": true, 00:14:16.378 "num_base_bdevs": 3, 00:14:16.378 "num_base_bdevs_discovered": 2, 00:14:16.378 "num_base_bdevs_operational": 2, 00:14:16.378 "base_bdevs_list": [ 00:14:16.378 { 00:14:16.378 "name": null, 00:14:16.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.378 "is_configured": false, 00:14:16.378 "data_offset": 0, 00:14:16.378 "data_size": 63488 00:14:16.378 }, 00:14:16.378 { 00:14:16.378 "name": "BaseBdev2", 00:14:16.378 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:16.378 "is_configured": true, 00:14:16.378 "data_offset": 2048, 00:14:16.378 "data_size": 
63488 00:14:16.378 }, 00:14:16.378 { 00:14:16.378 "name": "BaseBdev3", 00:14:16.378 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:16.378 "is_configured": true, 00:14:16.378 "data_offset": 2048, 00:14:16.378 "data_size": 63488 00:14:16.378 } 00:14:16.378 ] 00:14:16.378 }' 00:14:16.378 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.378 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.638 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.638 "name": "raid_bdev1", 00:14:16.638 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:16.638 "strip_size_kb": 64, 00:14:16.638 "state": "online", 00:14:16.638 "raid_level": "raid5f", 00:14:16.638 "superblock": true, 00:14:16.638 "num_base_bdevs": 3, 00:14:16.638 
"num_base_bdevs_discovered": 2, 00:14:16.638 "num_base_bdevs_operational": 2, 00:14:16.638 "base_bdevs_list": [ 00:14:16.638 { 00:14:16.638 "name": null, 00:14:16.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.638 "is_configured": false, 00:14:16.638 "data_offset": 0, 00:14:16.638 "data_size": 63488 00:14:16.638 }, 00:14:16.638 { 00:14:16.638 "name": "BaseBdev2", 00:14:16.638 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:16.638 "is_configured": true, 00:14:16.638 "data_offset": 2048, 00:14:16.638 "data_size": 63488 00:14:16.638 }, 00:14:16.638 { 00:14:16.639 "name": "BaseBdev3", 00:14:16.639 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:16.639 "is_configured": true, 00:14:16.639 "data_offset": 2048, 00:14:16.639 "data_size": 63488 00:14:16.639 } 00:14:16.639 ] 00:14:16.639 }' 00:14:16.639 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.639 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.639 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.899 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.899 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.899 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.899 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.899 [2024-11-26 22:58:55.810026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.899 [2024-11-26 22:58:55.814029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460 00:14:16.899 22:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.899 22:58:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:16.899 [2024-11-26 22:58:55.816092] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.838 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.839 "name": "raid_bdev1", 00:14:17.839 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:17.839 "strip_size_kb": 64, 00:14:17.839 "state": "online", 00:14:17.839 "raid_level": "raid5f", 00:14:17.839 "superblock": true, 00:14:17.839 "num_base_bdevs": 3, 00:14:17.839 "num_base_bdevs_discovered": 3, 00:14:17.839 "num_base_bdevs_operational": 3, 00:14:17.839 "process": { 00:14:17.839 "type": "rebuild", 00:14:17.839 "target": "spare", 00:14:17.839 "progress": { 00:14:17.839 "blocks": 20480, 00:14:17.839 "percent": 16 00:14:17.839 } 
00:14:17.839 }, 00:14:17.839 "base_bdevs_list": [ 00:14:17.839 { 00:14:17.839 "name": "spare", 00:14:17.839 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:17.839 "is_configured": true, 00:14:17.839 "data_offset": 2048, 00:14:17.839 "data_size": 63488 00:14:17.839 }, 00:14:17.839 { 00:14:17.839 "name": "BaseBdev2", 00:14:17.839 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:17.839 "is_configured": true, 00:14:17.839 "data_offset": 2048, 00:14:17.839 "data_size": 63488 00:14:17.839 }, 00:14:17.839 { 00:14:17.839 "name": "BaseBdev3", 00:14:17.839 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:17.839 "is_configured": true, 00:14:17.839 "data_offset": 2048, 00:14:17.839 "data_size": 63488 00:14:17.839 } 00:14:17.839 ] 00:14:17.839 }' 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.839 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:18.099 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=465 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.099 22:58:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 22:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.099 "name": "raid_bdev1", 00:14:18.099 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:18.099 "strip_size_kb": 64, 00:14:18.099 "state": "online", 00:14:18.099 "raid_level": "raid5f", 00:14:18.099 "superblock": true, 00:14:18.099 "num_base_bdevs": 3, 00:14:18.099 "num_base_bdevs_discovered": 3, 00:14:18.099 "num_base_bdevs_operational": 3, 00:14:18.099 "process": { 00:14:18.099 "type": "rebuild", 00:14:18.099 "target": "spare", 00:14:18.099 "progress": { 00:14:18.099 "blocks": 22528, 00:14:18.099 "percent": 17 00:14:18.099 } 00:14:18.099 }, 00:14:18.099 "base_bdevs_list": [ 00:14:18.099 { 00:14:18.099 "name": "spare", 00:14:18.099 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:18.099 "is_configured": true, 00:14:18.099 "data_offset": 2048, 00:14:18.099 
"data_size": 63488 00:14:18.099 }, 00:14:18.099 { 00:14:18.099 "name": "BaseBdev2", 00:14:18.099 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:18.099 "is_configured": true, 00:14:18.099 "data_offset": 2048, 00:14:18.099 "data_size": 63488 00:14:18.099 }, 00:14:18.099 { 00:14:18.099 "name": "BaseBdev3", 00:14:18.099 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:18.099 "is_configured": true, 00:14:18.099 "data_offset": 2048, 00:14:18.099 "data_size": 63488 00:14:18.099 } 00:14:18.099 ] 00:14:18.099 }' 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.099 22:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.038 
22:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.038 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.297 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.297 "name": "raid_bdev1", 00:14:19.297 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:19.297 "strip_size_kb": 64, 00:14:19.297 "state": "online", 00:14:19.297 "raid_level": "raid5f", 00:14:19.297 "superblock": true, 00:14:19.297 "num_base_bdevs": 3, 00:14:19.297 "num_base_bdevs_discovered": 3, 00:14:19.297 "num_base_bdevs_operational": 3, 00:14:19.297 "process": { 00:14:19.297 "type": "rebuild", 00:14:19.297 "target": "spare", 00:14:19.297 "progress": { 00:14:19.297 "blocks": 45056, 00:14:19.297 "percent": 35 00:14:19.297 } 00:14:19.297 }, 00:14:19.297 "base_bdevs_list": [ 00:14:19.297 { 00:14:19.297 "name": "spare", 00:14:19.297 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:19.297 "is_configured": true, 00:14:19.297 "data_offset": 2048, 00:14:19.297 "data_size": 63488 00:14:19.297 }, 00:14:19.297 { 00:14:19.297 "name": "BaseBdev2", 00:14:19.297 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:19.297 "is_configured": true, 00:14:19.297 "data_offset": 2048, 00:14:19.297 "data_size": 63488 00:14:19.297 }, 00:14:19.297 { 00:14:19.297 "name": "BaseBdev3", 00:14:19.297 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:19.297 "is_configured": true, 00:14:19.297 "data_offset": 2048, 00:14:19.297 "data_size": 63488 00:14:19.297 } 00:14:19.297 ] 00:14:19.297 }' 00:14:19.297 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.297 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.297 22:58:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.298 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.298 22:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.238 "name": "raid_bdev1", 00:14:20.238 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:20.238 "strip_size_kb": 64, 00:14:20.238 "state": "online", 00:14:20.238 "raid_level": "raid5f", 00:14:20.238 "superblock": true, 00:14:20.238 "num_base_bdevs": 3, 00:14:20.238 "num_base_bdevs_discovered": 3, 00:14:20.238 "num_base_bdevs_operational": 
3, 00:14:20.238 "process": { 00:14:20.238 "type": "rebuild", 00:14:20.238 "target": "spare", 00:14:20.238 "progress": { 00:14:20.238 "blocks": 69632, 00:14:20.238 "percent": 54 00:14:20.238 } 00:14:20.238 }, 00:14:20.238 "base_bdevs_list": [ 00:14:20.238 { 00:14:20.238 "name": "spare", 00:14:20.238 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:20.238 "is_configured": true, 00:14:20.238 "data_offset": 2048, 00:14:20.238 "data_size": 63488 00:14:20.238 }, 00:14:20.238 { 00:14:20.238 "name": "BaseBdev2", 00:14:20.238 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:20.238 "is_configured": true, 00:14:20.238 "data_offset": 2048, 00:14:20.238 "data_size": 63488 00:14:20.238 }, 00:14:20.238 { 00:14:20.238 "name": "BaseBdev3", 00:14:20.238 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:20.238 "is_configured": true, 00:14:20.238 "data_offset": 2048, 00:14:20.238 "data_size": 63488 00:14:20.238 } 00:14:20.238 ] 00:14:20.238 }' 00:14:20.238 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.497 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.497 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.497 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.497 22:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.435 
22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.435 "name": "raid_bdev1", 00:14:21.435 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:21.435 "strip_size_kb": 64, 00:14:21.435 "state": "online", 00:14:21.435 "raid_level": "raid5f", 00:14:21.435 "superblock": true, 00:14:21.435 "num_base_bdevs": 3, 00:14:21.435 "num_base_bdevs_discovered": 3, 00:14:21.435 "num_base_bdevs_operational": 3, 00:14:21.435 "process": { 00:14:21.435 "type": "rebuild", 00:14:21.435 "target": "spare", 00:14:21.435 "progress": { 00:14:21.435 "blocks": 92160, 00:14:21.435 "percent": 72 00:14:21.435 } 00:14:21.435 }, 00:14:21.435 "base_bdevs_list": [ 00:14:21.435 { 00:14:21.435 "name": "spare", 00:14:21.435 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:21.435 "is_configured": true, 00:14:21.435 "data_offset": 2048, 00:14:21.435 "data_size": 63488 00:14:21.435 }, 00:14:21.435 { 00:14:21.435 "name": "BaseBdev2", 00:14:21.435 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:21.435 "is_configured": true, 00:14:21.435 "data_offset": 2048, 00:14:21.435 "data_size": 63488 00:14:21.435 }, 00:14:21.435 { 00:14:21.435 "name": "BaseBdev3", 00:14:21.435 "uuid": 
"c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:21.435 "is_configured": true, 00:14:21.435 "data_offset": 2048, 00:14:21.435 "data_size": 63488 00:14:21.435 } 00:14:21.435 ] 00:14:21.435 }' 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.435 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.695 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.695 22:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.636 
22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.636 "name": "raid_bdev1", 00:14:22.636 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:22.636 "strip_size_kb": 64, 00:14:22.636 "state": "online", 00:14:22.636 "raid_level": "raid5f", 00:14:22.636 "superblock": true, 00:14:22.636 "num_base_bdevs": 3, 00:14:22.636 "num_base_bdevs_discovered": 3, 00:14:22.636 "num_base_bdevs_operational": 3, 00:14:22.636 "process": { 00:14:22.636 "type": "rebuild", 00:14:22.636 "target": "spare", 00:14:22.636 "progress": { 00:14:22.636 "blocks": 116736, 00:14:22.636 "percent": 91 00:14:22.636 } 00:14:22.636 }, 00:14:22.636 "base_bdevs_list": [ 00:14:22.636 { 00:14:22.636 "name": "spare", 00:14:22.636 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:22.636 "is_configured": true, 00:14:22.636 "data_offset": 2048, 00:14:22.636 "data_size": 63488 00:14:22.636 }, 00:14:22.636 { 00:14:22.636 "name": "BaseBdev2", 00:14:22.636 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:22.636 "is_configured": true, 00:14:22.636 "data_offset": 2048, 00:14:22.636 "data_size": 63488 00:14:22.636 }, 00:14:22.636 { 00:14:22.636 "name": "BaseBdev3", 00:14:22.636 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:22.636 "is_configured": true, 00:14:22.636 "data_offset": 2048, 00:14:22.636 "data_size": 63488 00:14:22.636 } 00:14:22.636 ] 00:14:22.636 }' 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.636 22:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.206 [2024-11-26 22:59:02.059573] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.206 [2024-11-26 22:59:02.059645] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.206 [2024-11-26 22:59:02.059737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.774 "name": "raid_bdev1", 00:14:23.774 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:23.774 "strip_size_kb": 64, 00:14:23.774 "state": "online", 00:14:23.774 "raid_level": "raid5f", 00:14:23.774 "superblock": true, 00:14:23.774 "num_base_bdevs": 3, 00:14:23.774 "num_base_bdevs_discovered": 3, 
00:14:23.774 "num_base_bdevs_operational": 3, 00:14:23.774 "base_bdevs_list": [ 00:14:23.774 { 00:14:23.774 "name": "spare", 00:14:23.774 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:23.774 "is_configured": true, 00:14:23.774 "data_offset": 2048, 00:14:23.774 "data_size": 63488 00:14:23.774 }, 00:14:23.774 { 00:14:23.774 "name": "BaseBdev2", 00:14:23.774 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:23.774 "is_configured": true, 00:14:23.774 "data_offset": 2048, 00:14:23.774 "data_size": 63488 00:14:23.774 }, 00:14:23.774 { 00:14:23.774 "name": "BaseBdev3", 00:14:23.774 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:23.774 "is_configured": true, 00:14:23.774 "data_offset": 2048, 00:14:23.774 "data_size": 63488 00:14:23.774 } 00:14:23.774 ] 00:14:23.774 }' 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.774 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.032 "name": "raid_bdev1", 00:14:24.032 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:24.032 "strip_size_kb": 64, 00:14:24.032 "state": "online", 00:14:24.032 "raid_level": "raid5f", 00:14:24.032 "superblock": true, 00:14:24.032 "num_base_bdevs": 3, 00:14:24.032 "num_base_bdevs_discovered": 3, 00:14:24.032 "num_base_bdevs_operational": 3, 00:14:24.032 "base_bdevs_list": [ 00:14:24.032 { 00:14:24.032 "name": "spare", 00:14:24.032 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:24.032 "is_configured": true, 00:14:24.032 "data_offset": 2048, 00:14:24.032 "data_size": 63488 00:14:24.032 }, 00:14:24.032 { 00:14:24.032 "name": "BaseBdev2", 00:14:24.032 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:24.032 "is_configured": true, 00:14:24.032 "data_offset": 2048, 00:14:24.032 "data_size": 63488 00:14:24.032 }, 00:14:24.032 { 00:14:24.032 "name": "BaseBdev3", 00:14:24.032 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:24.032 "is_configured": true, 00:14:24.032 "data_offset": 2048, 00:14:24.032 "data_size": 63488 00:14:24.032 } 00:14:24.032 ] 00:14:24.032 }' 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.032 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.033 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.033 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.033 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.033 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.033 22:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.033 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.033 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.033 "name": "raid_bdev1", 00:14:24.033 "uuid": 
"24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:24.033 "strip_size_kb": 64, 00:14:24.033 "state": "online", 00:14:24.033 "raid_level": "raid5f", 00:14:24.033 "superblock": true, 00:14:24.033 "num_base_bdevs": 3, 00:14:24.033 "num_base_bdevs_discovered": 3, 00:14:24.033 "num_base_bdevs_operational": 3, 00:14:24.033 "base_bdevs_list": [ 00:14:24.033 { 00:14:24.033 "name": "spare", 00:14:24.033 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:24.033 "is_configured": true, 00:14:24.033 "data_offset": 2048, 00:14:24.033 "data_size": 63488 00:14:24.033 }, 00:14:24.033 { 00:14:24.033 "name": "BaseBdev2", 00:14:24.033 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:24.033 "is_configured": true, 00:14:24.033 "data_offset": 2048, 00:14:24.033 "data_size": 63488 00:14:24.033 }, 00:14:24.033 { 00:14:24.033 "name": "BaseBdev3", 00:14:24.033 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:24.033 "is_configured": true, 00:14:24.033 "data_offset": 2048, 00:14:24.033 "data_size": 63488 00:14:24.033 } 00:14:24.033 ] 00:14:24.033 }' 00:14:24.033 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.033 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 [2024-11-26 22:59:03.469531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.602 [2024-11-26 22:59:03.469564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.602 [2024-11-26 22:59:03.469639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.602 [2024-11-26 22:59:03.469728] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.602 [2024-11-26 22:59:03.469745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.602 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.602 /dev/nbd0 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.863 1+0 records in 00:14:24.863 1+0 records out 00:14:24.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380259 s, 10.8 MB/s 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.863 22:59:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.863 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:24.863 /dev/nbd1 00:14:25.124 22:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.124 1+0 records in 00:14:25.124 1+0 records out 00:14:25.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367607 s, 11.1 MB/s 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.124 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:25.384 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.384 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.384 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.385 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 [2024-11-26 22:59:04.552983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.645 [2024-11-26 22:59:04.553042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.645 [2024-11-26 22:59:04.553061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:25.645 [2024-11-26 22:59:04.553070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.645 [2024-11-26 22:59:04.555206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.645 [2024-11-26 22:59:04.555261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.645 [2024-11-26 22:59:04.555338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:25.645 [2024-11-26 22:59:04.555378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.645 [2024-11-26 22:59:04.555497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.645 [2024-11-26 22:59:04.555591] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.645 spare 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 [2024-11-26 22:59:04.655646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:25.645 [2024-11-26 22:59:04.655679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:25.645 [2024-11-26 22:59:04.655914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:14:25.645 [2024-11-26 22:59:04.656314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:25.645 [2024-11-26 22:59:04.656332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:25.645 [2024-11-26 22:59:04.656446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.645 "name": "raid_bdev1", 00:14:25.645 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:25.645 "strip_size_kb": 64, 00:14:25.645 "state": "online", 00:14:25.645 "raid_level": "raid5f", 00:14:25.645 "superblock": true, 00:14:25.645 "num_base_bdevs": 3, 00:14:25.645 "num_base_bdevs_discovered": 3, 00:14:25.645 "num_base_bdevs_operational": 3, 00:14:25.645 "base_bdevs_list": [ 00:14:25.645 { 00:14:25.645 "name": "spare", 00:14:25.645 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:25.645 "is_configured": true, 00:14:25.645 "data_offset": 2048, 00:14:25.645 "data_size": 63488 00:14:25.645 }, 00:14:25.645 { 00:14:25.645 "name": "BaseBdev2", 00:14:25.645 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:25.645 "is_configured": true, 00:14:25.645 "data_offset": 
2048, 00:14:25.645 "data_size": 63488 00:14:25.645 }, 00:14:25.645 { 00:14:25.645 "name": "BaseBdev3", 00:14:25.645 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:25.645 "is_configured": true, 00:14:25.645 "data_offset": 2048, 00:14:25.645 "data_size": 63488 00:14:25.645 } 00:14:25.645 ] 00:14:25.645 }' 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.645 22:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.215 "name": "raid_bdev1", 00:14:26.215 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:26.215 "strip_size_kb": 64, 00:14:26.215 "state": "online", 00:14:26.215 "raid_level": "raid5f", 00:14:26.215 "superblock": true, 00:14:26.215 
"num_base_bdevs": 3, 00:14:26.215 "num_base_bdevs_discovered": 3, 00:14:26.215 "num_base_bdevs_operational": 3, 00:14:26.215 "base_bdevs_list": [ 00:14:26.215 { 00:14:26.215 "name": "spare", 00:14:26.215 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:26.215 "is_configured": true, 00:14:26.215 "data_offset": 2048, 00:14:26.215 "data_size": 63488 00:14:26.215 }, 00:14:26.215 { 00:14:26.215 "name": "BaseBdev2", 00:14:26.215 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:26.215 "is_configured": true, 00:14:26.215 "data_offset": 2048, 00:14:26.215 "data_size": 63488 00:14:26.215 }, 00:14:26.215 { 00:14:26.215 "name": "BaseBdev3", 00:14:26.215 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:26.215 "is_configured": true, 00:14:26.215 "data_offset": 2048, 00:14:26.215 "data_size": 63488 00:14:26.215 } 00:14:26.215 ] 00:14:26.215 }' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.215 22:59:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.215 [2024-11-26 22:59:05.309774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:26.215 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.475 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.475 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.475 "name": "raid_bdev1", 00:14:26.475 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:26.475 "strip_size_kb": 64, 00:14:26.475 "state": "online", 00:14:26.475 "raid_level": "raid5f", 00:14:26.475 "superblock": true, 00:14:26.475 "num_base_bdevs": 3, 00:14:26.475 "num_base_bdevs_discovered": 2, 00:14:26.475 "num_base_bdevs_operational": 2, 00:14:26.475 "base_bdevs_list": [ 00:14:26.475 { 00:14:26.475 "name": null, 00:14:26.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.475 "is_configured": false, 00:14:26.475 "data_offset": 0, 00:14:26.475 "data_size": 63488 00:14:26.475 }, 00:14:26.475 { 00:14:26.475 "name": "BaseBdev2", 00:14:26.475 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:26.475 "is_configured": true, 00:14:26.475 "data_offset": 2048, 00:14:26.475 "data_size": 63488 00:14:26.475 }, 00:14:26.475 { 00:14:26.475 "name": "BaseBdev3", 00:14:26.475 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:26.475 "is_configured": true, 00:14:26.475 "data_offset": 2048, 00:14:26.475 "data_size": 63488 00:14:26.475 } 00:14:26.475 ] 00:14:26.475 }' 00:14:26.475 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.475 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.735 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.735 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.735 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.735 [2024-11-26 22:59:05.729910] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.735 [2024-11-26 22:59:05.730054] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.735 [2024-11-26 22:59:05.730083] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:26.735 [2024-11-26 22:59:05.730125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.735 [2024-11-26 22:59:05.734493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0 00:14:26.735 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.735 22:59:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:26.735 [2024-11-26 22:59:05.736665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.674 "name": "raid_bdev1", 00:14:27.674 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:27.674 "strip_size_kb": 64, 00:14:27.674 "state": "online", 00:14:27.674 "raid_level": "raid5f", 00:14:27.674 "superblock": true, 00:14:27.674 "num_base_bdevs": 3, 00:14:27.674 "num_base_bdevs_discovered": 3, 00:14:27.674 "num_base_bdevs_operational": 3, 00:14:27.674 "process": { 00:14:27.674 "type": "rebuild", 00:14:27.674 "target": "spare", 00:14:27.674 "progress": { 00:14:27.674 "blocks": 20480, 00:14:27.674 "percent": 16 00:14:27.674 } 00:14:27.674 }, 00:14:27.674 "base_bdevs_list": [ 00:14:27.674 { 00:14:27.674 "name": "spare", 00:14:27.674 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:27.674 "is_configured": true, 00:14:27.674 "data_offset": 2048, 00:14:27.674 "data_size": 63488 00:14:27.674 }, 00:14:27.674 { 00:14:27.674 "name": "BaseBdev2", 00:14:27.674 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:27.674 "is_configured": true, 00:14:27.674 "data_offset": 2048, 00:14:27.674 "data_size": 63488 00:14:27.674 }, 00:14:27.674 { 00:14:27.674 "name": "BaseBdev3", 00:14:27.674 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:27.674 "is_configured": true, 00:14:27.674 "data_offset": 2048, 00:14:27.674 "data_size": 63488 00:14:27.674 } 00:14:27.674 ] 00:14:27.674 }' 00:14:27.674 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.934 [2024-11-26 22:59:06.898913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.934 [2024-11-26 22:59:06.945459] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.934 [2024-11-26 22:59:06.945515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.934 [2024-11-26 22:59:06.945529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.934 [2024-11-26 22:59:06.945543] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.934 22:59:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.934 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.935 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.935 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.935 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.935 22:59:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.935 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.935 "name": "raid_bdev1", 00:14:27.935 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:27.935 "strip_size_kb": 64, 00:14:27.935 "state": "online", 00:14:27.935 "raid_level": "raid5f", 00:14:27.935 "superblock": true, 00:14:27.935 "num_base_bdevs": 3, 00:14:27.935 "num_base_bdevs_discovered": 2, 00:14:27.935 "num_base_bdevs_operational": 2, 00:14:27.935 "base_bdevs_list": [ 00:14:27.935 { 00:14:27.935 "name": null, 00:14:27.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.935 "is_configured": false, 00:14:27.935 "data_offset": 0, 00:14:27.935 "data_size": 63488 00:14:27.935 }, 00:14:27.935 { 00:14:27.935 "name": "BaseBdev2", 00:14:27.935 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:27.935 "is_configured": true, 00:14:27.935 "data_offset": 2048, 00:14:27.935 "data_size": 63488 00:14:27.935 }, 00:14:27.935 { 00:14:27.935 "name": "BaseBdev3", 00:14:27.935 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:27.935 "is_configured": true, 00:14:27.935 "data_offset": 2048, 00:14:27.935 "data_size": 63488 00:14:27.935 } 00:14:27.935 ] 00:14:27.935 }' 00:14:27.935 22:59:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.935 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.505 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.505 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.505 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.505 [2024-11-26 22:59:07.447077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.505 [2024-11-26 22:59:07.447135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.505 [2024-11-26 22:59:07.447157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:28.505 [2024-11-26 22:59:07.447169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.505 [2024-11-26 22:59:07.447598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.505 [2024-11-26 22:59:07.447628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.505 [2024-11-26 22:59:07.447699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:28.505 [2024-11-26 22:59:07.447712] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:28.505 [2024-11-26 22:59:07.447721] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.505 [2024-11-26 22:59:07.447746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.505 [2024-11-26 22:59:07.451329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0 00:14:28.505 spare 00:14:28.505 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.505 22:59:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:28.505 [2024-11-26 22:59:07.453341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.469 "name": "raid_bdev1", 00:14:29.469 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:29.469 "strip_size_kb": 64, 00:14:29.469 "state": 
"online", 00:14:29.469 "raid_level": "raid5f", 00:14:29.469 "superblock": true, 00:14:29.469 "num_base_bdevs": 3, 00:14:29.469 "num_base_bdevs_discovered": 3, 00:14:29.469 "num_base_bdevs_operational": 3, 00:14:29.469 "process": { 00:14:29.469 "type": "rebuild", 00:14:29.469 "target": "spare", 00:14:29.469 "progress": { 00:14:29.469 "blocks": 20480, 00:14:29.469 "percent": 16 00:14:29.469 } 00:14:29.469 }, 00:14:29.469 "base_bdevs_list": [ 00:14:29.469 { 00:14:29.469 "name": "spare", 00:14:29.469 "uuid": "4e705d0d-7b9f-5405-84a0-5ed640d03190", 00:14:29.469 "is_configured": true, 00:14:29.469 "data_offset": 2048, 00:14:29.469 "data_size": 63488 00:14:29.469 }, 00:14:29.469 { 00:14:29.469 "name": "BaseBdev2", 00:14:29.469 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:29.469 "is_configured": true, 00:14:29.469 "data_offset": 2048, 00:14:29.469 "data_size": 63488 00:14:29.469 }, 00:14:29.469 { 00:14:29.469 "name": "BaseBdev3", 00:14:29.469 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:29.469 "is_configured": true, 00:14:29.469 "data_offset": 2048, 00:14:29.469 "data_size": 63488 00:14:29.469 } 00:14:29.469 ] 00:14:29.469 }' 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.469 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.757 [2024-11-26 22:59:08.608597] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.757 [2024-11-26 22:59:08.661962] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.757 [2024-11-26 22:59:08.662033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.757 [2024-11-26 22:59:08.662053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.757 [2024-11-26 22:59:08.662060] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.757 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.758 "name": "raid_bdev1", 00:14:29.758 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:29.758 "strip_size_kb": 64, 00:14:29.758 "state": "online", 00:14:29.758 "raid_level": "raid5f", 00:14:29.758 "superblock": true, 00:14:29.758 "num_base_bdevs": 3, 00:14:29.758 "num_base_bdevs_discovered": 2, 00:14:29.758 "num_base_bdevs_operational": 2, 00:14:29.758 "base_bdevs_list": [ 00:14:29.758 { 00:14:29.758 "name": null, 00:14:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.758 "is_configured": false, 00:14:29.758 "data_offset": 0, 00:14:29.758 "data_size": 63488 00:14:29.758 }, 00:14:29.758 { 00:14:29.758 "name": "BaseBdev2", 00:14:29.758 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:29.758 "is_configured": true, 00:14:29.758 "data_offset": 2048, 00:14:29.758 "data_size": 63488 00:14:29.758 }, 00:14:29.758 { 00:14:29.758 "name": "BaseBdev3", 00:14:29.758 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:29.758 "is_configured": true, 00:14:29.758 "data_offset": 2048, 00:14:29.758 "data_size": 63488 00:14:29.758 } 00:14:29.758 ] 00:14:29.758 }' 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.758 22:59:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.341 "name": "raid_bdev1", 00:14:30.341 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:30.341 "strip_size_kb": 64, 00:14:30.341 "state": "online", 00:14:30.341 "raid_level": "raid5f", 00:14:30.341 "superblock": true, 00:14:30.341 "num_base_bdevs": 3, 00:14:30.341 "num_base_bdevs_discovered": 2, 00:14:30.341 "num_base_bdevs_operational": 2, 00:14:30.341 "base_bdevs_list": [ 00:14:30.341 { 00:14:30.341 "name": null, 00:14:30.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.341 "is_configured": false, 00:14:30.341 "data_offset": 0, 00:14:30.341 "data_size": 63488 00:14:30.341 }, 00:14:30.341 { 00:14:30.341 "name": "BaseBdev2", 00:14:30.341 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:30.341 "is_configured": true, 00:14:30.341 "data_offset": 2048, 00:14:30.341 "data_size": 63488 00:14:30.341 }, 00:14:30.341 { 00:14:30.341 "name": "BaseBdev3", 00:14:30.341 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:30.341 "is_configured": true, 
00:14:30.341 "data_offset": 2048, 00:14:30.341 "data_size": 63488 00:14:30.341 } 00:14:30.341 ] 00:14:30.341 }' 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.341 [2024-11-26 22:59:09.327593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.341 [2024-11-26 22:59:09.327647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.341 [2024-11-26 22:59:09.327668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:30.341 [2024-11-26 22:59:09.327677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.341 [2024-11-26 22:59:09.328084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.341 [2024-11-26 
22:59:09.328110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.341 [2024-11-26 22:59:09.328178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:30.341 [2024-11-26 22:59:09.328194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.341 [2024-11-26 22:59:09.328203] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.341 [2024-11-26 22:59:09.328220] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:30.341 BaseBdev1 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.341 22:59:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.281 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.281 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.282 22:59:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.282 "name": "raid_bdev1", 00:14:31.282 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:31.282 "strip_size_kb": 64, 00:14:31.282 "state": "online", 00:14:31.282 "raid_level": "raid5f", 00:14:31.282 "superblock": true, 00:14:31.282 "num_base_bdevs": 3, 00:14:31.282 "num_base_bdevs_discovered": 2, 00:14:31.282 "num_base_bdevs_operational": 2, 00:14:31.282 "base_bdevs_list": [ 00:14:31.282 { 00:14:31.282 "name": null, 00:14:31.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.282 "is_configured": false, 00:14:31.282 "data_offset": 0, 00:14:31.282 "data_size": 63488 00:14:31.282 }, 00:14:31.282 { 00:14:31.282 "name": "BaseBdev2", 00:14:31.282 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:31.282 "is_configured": true, 00:14:31.282 "data_offset": 2048, 00:14:31.282 "data_size": 63488 00:14:31.282 }, 00:14:31.282 { 00:14:31.282 "name": "BaseBdev3", 00:14:31.282 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:31.282 "is_configured": true, 00:14:31.282 "data_offset": 2048, 00:14:31.282 "data_size": 63488 00:14:31.282 } 00:14:31.282 ] 00:14:31.282 }' 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.282 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.852 "name": "raid_bdev1", 00:14:31.852 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:31.852 "strip_size_kb": 64, 00:14:31.852 "state": "online", 00:14:31.852 "raid_level": "raid5f", 00:14:31.852 "superblock": true, 00:14:31.852 "num_base_bdevs": 3, 00:14:31.852 "num_base_bdevs_discovered": 2, 00:14:31.852 "num_base_bdevs_operational": 2, 00:14:31.852 "base_bdevs_list": [ 00:14:31.852 { 00:14:31.852 "name": null, 00:14:31.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.852 "is_configured": false, 00:14:31.852 "data_offset": 0, 00:14:31.852 "data_size": 63488 00:14:31.852 }, 00:14:31.852 { 00:14:31.852 "name": "BaseBdev2", 00:14:31.852 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 
00:14:31.852 "is_configured": true, 00:14:31.852 "data_offset": 2048, 00:14:31.852 "data_size": 63488 00:14:31.852 }, 00:14:31.852 { 00:14:31.852 "name": "BaseBdev3", 00:14:31.852 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:31.852 "is_configured": true, 00:14:31.852 "data_offset": 2048, 00:14:31.852 "data_size": 63488 00:14:31.852 } 00:14:31.852 ] 00:14:31.852 }' 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 22:59:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [2024-11-26 22:59:10.944100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.852 [2024-11-26 22:59:10.944260] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:31.852 [2024-11-26 22:59:10.944278] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.852 request: 00:14:31.852 { 00:14:31.852 "base_bdev": "BaseBdev1", 00:14:31.852 "raid_bdev": "raid_bdev1", 00:14:31.852 "method": "bdev_raid_add_base_bdev", 00:14:31.852 "req_id": 1 00:14:31.852 } 00:14:31.852 Got JSON-RPC error response 00:14:31.852 response: 00:14:31.852 { 00:14:31.852 "code": -22, 00:14:31.852 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:31.852 } 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.852 22:59:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.234 22:59:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.234 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.234 "name": "raid_bdev1", 00:14:33.234 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:33.234 "strip_size_kb": 64, 00:14:33.234 "state": "online", 00:14:33.234 "raid_level": "raid5f", 00:14:33.234 "superblock": true, 00:14:33.234 "num_base_bdevs": 3, 00:14:33.234 "num_base_bdevs_discovered": 2, 00:14:33.234 "num_base_bdevs_operational": 2, 00:14:33.234 "base_bdevs_list": [ 00:14:33.234 { 00:14:33.234 "name": null, 00:14:33.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.234 "is_configured": false, 00:14:33.234 "data_offset": 0, 00:14:33.234 "data_size": 63488 00:14:33.234 }, 00:14:33.234 { 00:14:33.234 
"name": "BaseBdev2", 00:14:33.234 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:33.234 "is_configured": true, 00:14:33.234 "data_offset": 2048, 00:14:33.234 "data_size": 63488 00:14:33.234 }, 00:14:33.234 { 00:14:33.234 "name": "BaseBdev3", 00:14:33.234 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:33.234 "is_configured": true, 00:14:33.234 "data_offset": 2048, 00:14:33.234 "data_size": 63488 00:14:33.234 } 00:14:33.234 ] 00:14:33.234 }' 00:14:33.234 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.234 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.509 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.510 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.510 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.510 "name": "raid_bdev1", 00:14:33.510 "uuid": "24225a65-5178-433b-a9b6-bcc1ed8f5605", 00:14:33.510 
"strip_size_kb": 64, 00:14:33.510 "state": "online", 00:14:33.510 "raid_level": "raid5f", 00:14:33.510 "superblock": true, 00:14:33.510 "num_base_bdevs": 3, 00:14:33.510 "num_base_bdevs_discovered": 2, 00:14:33.510 "num_base_bdevs_operational": 2, 00:14:33.510 "base_bdevs_list": [ 00:14:33.510 { 00:14:33.510 "name": null, 00:14:33.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.510 "is_configured": false, 00:14:33.510 "data_offset": 0, 00:14:33.510 "data_size": 63488 00:14:33.510 }, 00:14:33.510 { 00:14:33.510 "name": "BaseBdev2", 00:14:33.510 "uuid": "bf91d50a-73b5-5bbc-bdd2-205138963d37", 00:14:33.510 "is_configured": true, 00:14:33.510 "data_offset": 2048, 00:14:33.510 "data_size": 63488 00:14:33.510 }, 00:14:33.510 { 00:14:33.510 "name": "BaseBdev3", 00:14:33.510 "uuid": "c2d18b94-f30f-5147-9269-7c59ee515cbb", 00:14:33.510 "is_configured": true, 00:14:33.510 "data_offset": 2048, 00:14:33.510 "data_size": 63488 00:14:33.511 } 00:14:33.511 ] 00:14:33.511 }' 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94152 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94152 ']' 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94152 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.511 22:59:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94152 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.511 killing process with pid 94152 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94152' 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94152 00:14:33.511 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.511 00:14:33.511 Latency(us) 00:14:33.511 [2024-11-26T22:59:12.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.511 [2024-11-26T22:59:12.639Z] =================================================================================================================== 00:14:33.511 [2024-11-26T22:59:12.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.511 [2024-11-26 22:59:12.586084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.511 [2024-11-26 22:59:12.586198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.511 [2024-11-26 22:59:12.586274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.511 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94152 00:14:33.511 [2024-11-26 22:59:12.586286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:33.511 [2024-11-26 22:59:12.626886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.780 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:33.780 00:14:33.780 real 0m21.704s 00:14:33.780 user 0m28.251s 
00:14:33.780 sys 0m2.874s 00:14:33.780 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.780 22:59:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.780 ************************************ 00:14:33.780 END TEST raid5f_rebuild_test_sb 00:14:33.780 ************************************ 00:14:33.780 22:59:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:33.780 22:59:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:33.780 22:59:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.780 22:59:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.780 22:59:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.040 ************************************ 00:14:34.040 START TEST raid5f_state_function_test 00:14:34.040 ************************************ 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94882 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:34.040 Process raid pid: 94882 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94882' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94882 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 94882 ']' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.040 22:59:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.040 [2024-11-26 22:59:13.015275] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:14:34.040 [2024-11-26 22:59:13.015878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.040 [2024-11-26 22:59:13.152649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:34.300 [2024-11-26 22:59:13.187968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.300 [2024-11-26 22:59:13.213585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.301 [2024-11-26 22:59:13.256132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.301 [2024-11-26 22:59:13.256176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.870 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.870 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.870 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:34.870 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 [2024-11-26 22:59:13.823957] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.871 [2024-11-26 22:59:13.824010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.871 [2024-11-26 22:59:13.824022] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.871 [2024-11-26 22:59:13.824030] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.871 [2024-11-26 22:59:13.824040] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.871 [2024-11-26 22:59:13.824046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.871 [2024-11-26 22:59:13.824054] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.871 [2024-11-26 22:59:13.824060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.871 "name": "Existed_Raid", 00:14:34.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.871 "strip_size_kb": 64, 00:14:34.871 "state": "configuring", 00:14:34.871 "raid_level": "raid5f", 00:14:34.871 "superblock": false, 00:14:34.871 "num_base_bdevs": 4, 00:14:34.871 "num_base_bdevs_discovered": 0, 00:14:34.871 "num_base_bdevs_operational": 4, 00:14:34.871 "base_bdevs_list": [ 00:14:34.871 { 00:14:34.871 "name": "BaseBdev1", 00:14:34.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.871 "is_configured": false, 00:14:34.871 "data_offset": 0, 00:14:34.871 "data_size": 0 00:14:34.871 }, 00:14:34.871 { 00:14:34.871 "name": "BaseBdev2", 00:14:34.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.871 "is_configured": false, 00:14:34.871 "data_offset": 0, 00:14:34.871 "data_size": 0 00:14:34.871 }, 00:14:34.871 { 00:14:34.871 "name": "BaseBdev3", 00:14:34.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.871 "is_configured": false, 00:14:34.871 "data_offset": 0, 00:14:34.871 "data_size": 0 00:14:34.871 }, 00:14:34.871 { 00:14:34.871 "name": "BaseBdev4", 00:14:34.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.871 "is_configured": false, 00:14:34.871 "data_offset": 0, 00:14:34.871 "data_size": 0 00:14:34.871 } 00:14:34.871 ] 00:14:34.871 }' 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:34.871 22:59:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 [2024-11-26 22:59:14.275989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.442 [2024-11-26 22:59:14.276027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 [2024-11-26 22:59:14.288011] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.442 [2024-11-26 22:59:14.288051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.442 [2024-11-26 22:59:14.288062] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.442 [2024-11-26 22:59:14.288069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.442 [2024-11-26 22:59:14.288076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.442 [2024-11-26 22:59:14.288082] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.442 [2024-11-26 22:59:14.288089] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.442 [2024-11-26 22:59:14.288095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 [2024-11-26 22:59:14.308968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.442 BaseBdev1 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.442 22:59:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.442 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 [ 00:14:35.442 { 00:14:35.442 "name": "BaseBdev1", 00:14:35.443 "aliases": [ 00:14:35.443 "69b30f2f-b838-4e69-a84f-2e3987322ab8" 00:14:35.443 ], 00:14:35.443 "product_name": "Malloc disk", 00:14:35.443 "block_size": 512, 00:14:35.443 "num_blocks": 65536, 00:14:35.443 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:35.443 "assigned_rate_limits": { 00:14:35.443 "rw_ios_per_sec": 0, 00:14:35.443 "rw_mbytes_per_sec": 0, 00:14:35.443 "r_mbytes_per_sec": 0, 00:14:35.443 "w_mbytes_per_sec": 0 00:14:35.443 }, 00:14:35.443 "claimed": true, 00:14:35.443 "claim_type": "exclusive_write", 00:14:35.443 "zoned": false, 00:14:35.443 "supported_io_types": { 00:14:35.443 "read": true, 00:14:35.443 "write": true, 00:14:35.443 "unmap": true, 00:14:35.443 "flush": true, 00:14:35.443 "reset": true, 00:14:35.443 "nvme_admin": false, 00:14:35.443 "nvme_io": false, 00:14:35.443 "nvme_io_md": false, 00:14:35.443 "write_zeroes": true, 00:14:35.443 "zcopy": true, 00:14:35.443 "get_zone_info": false, 00:14:35.443 "zone_management": false, 00:14:35.443 "zone_append": false, 00:14:35.443 "compare": false, 00:14:35.443 "compare_and_write": false, 00:14:35.443 "abort": true, 00:14:35.443 "seek_hole": false, 00:14:35.443 "seek_data": false, 00:14:35.443 "copy": true, 00:14:35.443 "nvme_iov_md": false 00:14:35.443 }, 00:14:35.443 "memory_domains": [ 00:14:35.443 { 00:14:35.443 "dma_device_id": "system", 00:14:35.443 "dma_device_type": 1 
00:14:35.443 }, 00:14:35.443 { 00:14:35.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.443 "dma_device_type": 2 00:14:35.443 } 00:14:35.443 ], 00:14:35.443 "driver_specific": {} 00:14:35.443 } 00:14:35.443 ] 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.443 
22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.443 "name": "Existed_Raid", 00:14:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.443 "strip_size_kb": 64, 00:14:35.443 "state": "configuring", 00:14:35.443 "raid_level": "raid5f", 00:14:35.443 "superblock": false, 00:14:35.443 "num_base_bdevs": 4, 00:14:35.443 "num_base_bdevs_discovered": 1, 00:14:35.443 "num_base_bdevs_operational": 4, 00:14:35.443 "base_bdevs_list": [ 00:14:35.443 { 00:14:35.443 "name": "BaseBdev1", 00:14:35.443 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:35.443 "is_configured": true, 00:14:35.443 "data_offset": 0, 00:14:35.443 "data_size": 65536 00:14:35.443 }, 00:14:35.443 { 00:14:35.443 "name": "BaseBdev2", 00:14:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.443 "is_configured": false, 00:14:35.443 "data_offset": 0, 00:14:35.443 "data_size": 0 00:14:35.443 }, 00:14:35.443 { 00:14:35.443 "name": "BaseBdev3", 00:14:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.443 "is_configured": false, 00:14:35.443 "data_offset": 0, 00:14:35.443 "data_size": 0 00:14:35.443 }, 00:14:35.443 { 00:14:35.443 "name": "BaseBdev4", 00:14:35.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.443 "is_configured": false, 00:14:35.443 "data_offset": 0, 00:14:35.443 "data_size": 0 00:14:35.443 } 00:14:35.443 ] 00:14:35.443 }' 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.443 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.704 22:59:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 [2024-11-26 22:59:14.797098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.704 [2024-11-26 22:59:14.797144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 [2024-11-26 22:59:14.809149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.704 [2024-11-26 22:59:14.810949] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.704 [2024-11-26 22:59:14.811021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.704 [2024-11-26 22:59:14.811049] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.704 [2024-11-26 22:59:14.811070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.704 [2024-11-26 22:59:14.811089] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.704 [2024-11-26 22:59:14.811107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.704 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.965 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:35.965 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.965 "name": "Existed_Raid", 00:14:35.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.965 "strip_size_kb": 64, 00:14:35.965 "state": "configuring", 00:14:35.965 "raid_level": "raid5f", 00:14:35.965 "superblock": false, 00:14:35.965 "num_base_bdevs": 4, 00:14:35.965 "num_base_bdevs_discovered": 1, 00:14:35.965 "num_base_bdevs_operational": 4, 00:14:35.965 "base_bdevs_list": [ 00:14:35.965 { 00:14:35.965 "name": "BaseBdev1", 00:14:35.965 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:35.965 "is_configured": true, 00:14:35.965 "data_offset": 0, 00:14:35.965 "data_size": 65536 00:14:35.965 }, 00:14:35.965 { 00:14:35.965 "name": "BaseBdev2", 00:14:35.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.965 "is_configured": false, 00:14:35.965 "data_offset": 0, 00:14:35.965 "data_size": 0 00:14:35.965 }, 00:14:35.965 { 00:14:35.965 "name": "BaseBdev3", 00:14:35.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.965 "is_configured": false, 00:14:35.965 "data_offset": 0, 00:14:35.965 "data_size": 0 00:14:35.965 }, 00:14:35.965 { 00:14:35.965 "name": "BaseBdev4", 00:14:35.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.965 "is_configured": false, 00:14:35.965 "data_offset": 0, 00:14:35.965 "data_size": 0 00:14:35.965 } 00:14:35.965 ] 00:14:35.965 }' 00:14:35.965 22:59:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.965 22:59:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 
[2024-11-26 22:59:15.296320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.225 BaseBdev2 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.225 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 [ 00:14:36.225 { 00:14:36.225 "name": "BaseBdev2", 00:14:36.225 "aliases": [ 00:14:36.225 "3d838621-3278-4480-ae2a-95a92b1a739a" 00:14:36.225 ], 00:14:36.225 "product_name": "Malloc disk", 00:14:36.225 "block_size": 512, 00:14:36.225 "num_blocks": 
65536, 00:14:36.225 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:36.225 "assigned_rate_limits": { 00:14:36.225 "rw_ios_per_sec": 0, 00:14:36.225 "rw_mbytes_per_sec": 0, 00:14:36.225 "r_mbytes_per_sec": 0, 00:14:36.225 "w_mbytes_per_sec": 0 00:14:36.225 }, 00:14:36.225 "claimed": true, 00:14:36.225 "claim_type": "exclusive_write", 00:14:36.225 "zoned": false, 00:14:36.225 "supported_io_types": { 00:14:36.225 "read": true, 00:14:36.225 "write": true, 00:14:36.225 "unmap": true, 00:14:36.225 "flush": true, 00:14:36.225 "reset": true, 00:14:36.225 "nvme_admin": false, 00:14:36.225 "nvme_io": false, 00:14:36.225 "nvme_io_md": false, 00:14:36.225 "write_zeroes": true, 00:14:36.225 "zcopy": true, 00:14:36.226 "get_zone_info": false, 00:14:36.226 "zone_management": false, 00:14:36.226 "zone_append": false, 00:14:36.226 "compare": false, 00:14:36.226 "compare_and_write": false, 00:14:36.226 "abort": true, 00:14:36.226 "seek_hole": false, 00:14:36.226 "seek_data": false, 00:14:36.226 "copy": true, 00:14:36.226 "nvme_iov_md": false 00:14:36.226 }, 00:14:36.226 "memory_domains": [ 00:14:36.226 { 00:14:36.226 "dma_device_id": "system", 00:14:36.226 "dma_device_type": 1 00:14:36.226 }, 00:14:36.226 { 00:14:36.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.226 "dma_device_type": 2 00:14:36.226 } 00:14:36.226 ], 00:14:36.226 "driver_specific": {} 00:14:36.226 } 00:14:36.226 ] 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.226 22:59:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.226 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.486 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.486 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.486 "name": "Existed_Raid", 00:14:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.486 "strip_size_kb": 64, 00:14:36.486 "state": "configuring", 00:14:36.486 "raid_level": "raid5f", 00:14:36.486 "superblock": false, 00:14:36.486 "num_base_bdevs": 4, 00:14:36.486 
"num_base_bdevs_discovered": 2, 00:14:36.486 "num_base_bdevs_operational": 4, 00:14:36.486 "base_bdevs_list": [ 00:14:36.486 { 00:14:36.486 "name": "BaseBdev1", 00:14:36.486 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:36.486 "is_configured": true, 00:14:36.486 "data_offset": 0, 00:14:36.486 "data_size": 65536 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "name": "BaseBdev2", 00:14:36.486 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:36.486 "is_configured": true, 00:14:36.486 "data_offset": 0, 00:14:36.486 "data_size": 65536 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "name": "BaseBdev3", 00:14:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.486 "is_configured": false, 00:14:36.486 "data_offset": 0, 00:14:36.486 "data_size": 0 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "name": "BaseBdev4", 00:14:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.486 "is_configured": false, 00:14:36.486 "data_offset": 0, 00:14:36.486 "data_size": 0 00:14:36.486 } 00:14:36.486 ] 00:14:36.486 }' 00:14:36.486 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.486 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.747 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.747 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.747 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.747 [2024-11-26 22:59:15.775043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.747 BaseBdev3 00:14:36.747 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.747 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:36.747 22:59:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.748 [ 00:14:36.748 { 00:14:36.748 "name": "BaseBdev3", 00:14:36.748 "aliases": [ 00:14:36.748 "ac989ba5-d505-4ee8-951b-e5ffe2f25564" 00:14:36.748 ], 00:14:36.748 "product_name": "Malloc disk", 00:14:36.748 "block_size": 512, 00:14:36.748 "num_blocks": 65536, 00:14:36.748 "uuid": "ac989ba5-d505-4ee8-951b-e5ffe2f25564", 00:14:36.748 "assigned_rate_limits": { 00:14:36.748 "rw_ios_per_sec": 0, 00:14:36.748 "rw_mbytes_per_sec": 0, 00:14:36.748 "r_mbytes_per_sec": 0, 00:14:36.748 "w_mbytes_per_sec": 0 00:14:36.748 }, 00:14:36.748 "claimed": true, 00:14:36.748 "claim_type": "exclusive_write", 00:14:36.748 "zoned": false, 00:14:36.748 
"supported_io_types": { 00:14:36.748 "read": true, 00:14:36.748 "write": true, 00:14:36.748 "unmap": true, 00:14:36.748 "flush": true, 00:14:36.748 "reset": true, 00:14:36.748 "nvme_admin": false, 00:14:36.748 "nvme_io": false, 00:14:36.748 "nvme_io_md": false, 00:14:36.748 "write_zeroes": true, 00:14:36.748 "zcopy": true, 00:14:36.748 "get_zone_info": false, 00:14:36.748 "zone_management": false, 00:14:36.748 "zone_append": false, 00:14:36.748 "compare": false, 00:14:36.748 "compare_and_write": false, 00:14:36.748 "abort": true, 00:14:36.748 "seek_hole": false, 00:14:36.748 "seek_data": false, 00:14:36.748 "copy": true, 00:14:36.748 "nvme_iov_md": false 00:14:36.748 }, 00:14:36.748 "memory_domains": [ 00:14:36.748 { 00:14:36.748 "dma_device_id": "system", 00:14:36.748 "dma_device_type": 1 00:14:36.748 }, 00:14:36.748 { 00:14:36.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.748 "dma_device_type": 2 00:14:36.748 } 00:14:36.748 ], 00:14:36.748 "driver_specific": {} 00:14:36.748 } 00:14:36.748 ] 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.748 22:59:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.748 "name": "Existed_Raid", 00:14:36.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.748 "strip_size_kb": 64, 00:14:36.748 "state": "configuring", 00:14:36.748 "raid_level": "raid5f", 00:14:36.748 "superblock": false, 00:14:36.748 "num_base_bdevs": 4, 00:14:36.748 "num_base_bdevs_discovered": 3, 00:14:36.748 "num_base_bdevs_operational": 4, 00:14:36.748 "base_bdevs_list": [ 00:14:36.748 { 00:14:36.748 "name": "BaseBdev1", 00:14:36.748 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:36.748 "is_configured": true, 00:14:36.748 "data_offset": 0, 00:14:36.748 "data_size": 65536 00:14:36.748 }, 00:14:36.748 { 00:14:36.748 "name": 
"BaseBdev2", 00:14:36.748 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:36.748 "is_configured": true, 00:14:36.748 "data_offset": 0, 00:14:36.748 "data_size": 65536 00:14:36.748 }, 00:14:36.748 { 00:14:36.748 "name": "BaseBdev3", 00:14:36.748 "uuid": "ac989ba5-d505-4ee8-951b-e5ffe2f25564", 00:14:36.748 "is_configured": true, 00:14:36.748 "data_offset": 0, 00:14:36.748 "data_size": 65536 00:14:36.748 }, 00:14:36.748 { 00:14:36.748 "name": "BaseBdev4", 00:14:36.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.748 "is_configured": false, 00:14:36.748 "data_offset": 0, 00:14:36.748 "data_size": 0 00:14:36.748 } 00:14:36.748 ] 00:14:36.748 }' 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.748 22:59:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 [2024-11-26 22:59:16.226138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.317 [2024-11-26 22:59:16.226196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:37.317 [2024-11-26 22:59:16.226208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:37.317 [2024-11-26 22:59:16.226478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:37.317 [2024-11-26 22:59:16.226938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:37.317 [2024-11-26 22:59:16.226957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007b00 00:14:37.317 [2024-11-26 22:59:16.227149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.317 BaseBdev4 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 [ 00:14:37.317 { 00:14:37.317 "name": "BaseBdev4", 00:14:37.317 "aliases": [ 00:14:37.317 "a634b38e-7936-475d-bb4d-97ac81197273" 00:14:37.317 ], 00:14:37.317 "product_name": "Malloc disk", 00:14:37.317 "block_size": 512, 
00:14:37.317 "num_blocks": 65536, 00:14:37.317 "uuid": "a634b38e-7936-475d-bb4d-97ac81197273", 00:14:37.317 "assigned_rate_limits": { 00:14:37.317 "rw_ios_per_sec": 0, 00:14:37.317 "rw_mbytes_per_sec": 0, 00:14:37.317 "r_mbytes_per_sec": 0, 00:14:37.317 "w_mbytes_per_sec": 0 00:14:37.317 }, 00:14:37.317 "claimed": true, 00:14:37.317 "claim_type": "exclusive_write", 00:14:37.317 "zoned": false, 00:14:37.317 "supported_io_types": { 00:14:37.317 "read": true, 00:14:37.317 "write": true, 00:14:37.317 "unmap": true, 00:14:37.317 "flush": true, 00:14:37.317 "reset": true, 00:14:37.317 "nvme_admin": false, 00:14:37.317 "nvme_io": false, 00:14:37.317 "nvme_io_md": false, 00:14:37.317 "write_zeroes": true, 00:14:37.317 "zcopy": true, 00:14:37.317 "get_zone_info": false, 00:14:37.317 "zone_management": false, 00:14:37.317 "zone_append": false, 00:14:37.317 "compare": false, 00:14:37.317 "compare_and_write": false, 00:14:37.317 "abort": true, 00:14:37.317 "seek_hole": false, 00:14:37.317 "seek_data": false, 00:14:37.317 "copy": true, 00:14:37.317 "nvme_iov_md": false 00:14:37.317 }, 00:14:37.317 "memory_domains": [ 00:14:37.317 { 00:14:37.317 "dma_device_id": "system", 00:14:37.317 "dma_device_type": 1 00:14:37.317 }, 00:14:37.317 { 00:14:37.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.317 "dma_device_type": 2 00:14:37.317 } 00:14:37.317 ], 00:14:37.317 "driver_specific": {} 00:14:37.317 } 00:14:37.317 ] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 
00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.317 "name": "Existed_Raid", 00:14:37.317 "uuid": "f2470a56-97fe-493e-83c4-a9c5b4b3cb41", 00:14:37.317 "strip_size_kb": 64, 00:14:37.317 "state": "online", 00:14:37.317 "raid_level": "raid5f", 00:14:37.317 "superblock": false, 00:14:37.317 "num_base_bdevs": 4, 00:14:37.317 
"num_base_bdevs_discovered": 4, 00:14:37.317 "num_base_bdevs_operational": 4, 00:14:37.317 "base_bdevs_list": [ 00:14:37.317 { 00:14:37.317 "name": "BaseBdev1", 00:14:37.317 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:37.317 "is_configured": true, 00:14:37.317 "data_offset": 0, 00:14:37.318 "data_size": 65536 00:14:37.318 }, 00:14:37.318 { 00:14:37.318 "name": "BaseBdev2", 00:14:37.318 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:37.318 "is_configured": true, 00:14:37.318 "data_offset": 0, 00:14:37.318 "data_size": 65536 00:14:37.318 }, 00:14:37.318 { 00:14:37.318 "name": "BaseBdev3", 00:14:37.318 "uuid": "ac989ba5-d505-4ee8-951b-e5ffe2f25564", 00:14:37.318 "is_configured": true, 00:14:37.318 "data_offset": 0, 00:14:37.318 "data_size": 65536 00:14:37.318 }, 00:14:37.318 { 00:14:37.318 "name": "BaseBdev4", 00:14:37.318 "uuid": "a634b38e-7936-475d-bb4d-97ac81197273", 00:14:37.318 "is_configured": true, 00:14:37.318 "data_offset": 0, 00:14:37.318 "data_size": 65536 00:14:37.318 } 00:14:37.318 ] 00:14:37.318 }' 00:14:37.318 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.318 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.577 22:59:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.577 [2024-11-26 22:59:16.670466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.577 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.837 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.837 "name": "Existed_Raid", 00:14:37.837 "aliases": [ 00:14:37.837 "f2470a56-97fe-493e-83c4-a9c5b4b3cb41" 00:14:37.837 ], 00:14:37.837 "product_name": "Raid Volume", 00:14:37.837 "block_size": 512, 00:14:37.837 "num_blocks": 196608, 00:14:37.837 "uuid": "f2470a56-97fe-493e-83c4-a9c5b4b3cb41", 00:14:37.837 "assigned_rate_limits": { 00:14:37.837 "rw_ios_per_sec": 0, 00:14:37.837 "rw_mbytes_per_sec": 0, 00:14:37.837 "r_mbytes_per_sec": 0, 00:14:37.837 "w_mbytes_per_sec": 0 00:14:37.837 }, 00:14:37.837 "claimed": false, 00:14:37.837 "zoned": false, 00:14:37.837 "supported_io_types": { 00:14:37.837 "read": true, 00:14:37.837 "write": true, 00:14:37.837 "unmap": false, 00:14:37.837 "flush": false, 00:14:37.837 "reset": true, 00:14:37.837 "nvme_admin": false, 00:14:37.837 "nvme_io": false, 00:14:37.837 "nvme_io_md": false, 00:14:37.837 "write_zeroes": true, 00:14:37.837 "zcopy": false, 00:14:37.837 "get_zone_info": false, 00:14:37.837 "zone_management": false, 00:14:37.837 "zone_append": false, 00:14:37.837 "compare": false, 00:14:37.837 "compare_and_write": false, 00:14:37.837 "abort": false, 00:14:37.837 "seek_hole": false, 00:14:37.837 "seek_data": false, 00:14:37.837 "copy": false, 00:14:37.837 "nvme_iov_md": false 
00:14:37.838 }, 00:14:37.838 "driver_specific": { 00:14:37.838 "raid": { 00:14:37.838 "uuid": "f2470a56-97fe-493e-83c4-a9c5b4b3cb41", 00:14:37.838 "strip_size_kb": 64, 00:14:37.838 "state": "online", 00:14:37.838 "raid_level": "raid5f", 00:14:37.838 "superblock": false, 00:14:37.838 "num_base_bdevs": 4, 00:14:37.838 "num_base_bdevs_discovered": 4, 00:14:37.838 "num_base_bdevs_operational": 4, 00:14:37.838 "base_bdevs_list": [ 00:14:37.838 { 00:14:37.838 "name": "BaseBdev1", 00:14:37.838 "uuid": "69b30f2f-b838-4e69-a84f-2e3987322ab8", 00:14:37.838 "is_configured": true, 00:14:37.838 "data_offset": 0, 00:14:37.838 "data_size": 65536 00:14:37.838 }, 00:14:37.838 { 00:14:37.838 "name": "BaseBdev2", 00:14:37.838 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:37.838 "is_configured": true, 00:14:37.838 "data_offset": 0, 00:14:37.838 "data_size": 65536 00:14:37.838 }, 00:14:37.838 { 00:14:37.838 "name": "BaseBdev3", 00:14:37.838 "uuid": "ac989ba5-d505-4ee8-951b-e5ffe2f25564", 00:14:37.838 "is_configured": true, 00:14:37.838 "data_offset": 0, 00:14:37.838 "data_size": 65536 00:14:37.838 }, 00:14:37.838 { 00:14:37.838 "name": "BaseBdev4", 00:14:37.838 "uuid": "a634b38e-7936-475d-bb4d-97ac81197273", 00:14:37.838 "is_configured": true, 00:14:37.838 "data_offset": 0, 00:14:37.838 "data_size": 65536 00:14:37.838 } 00:14:37.838 ] 00:14:37.838 } 00:14:37.838 } 00:14:37.838 }' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:37.838 BaseBdev2 00:14:37.838 BaseBdev3 00:14:37.838 BaseBdev4' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.838 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:38.099 
22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.099 [2024-11-26 22:59:16.978426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.099 22:59:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.099 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.099 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.099 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.099 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.099 "name": "Existed_Raid", 00:14:38.099 "uuid": "f2470a56-97fe-493e-83c4-a9c5b4b3cb41", 00:14:38.099 "strip_size_kb": 64, 00:14:38.099 "state": "online", 00:14:38.099 "raid_level": "raid5f", 00:14:38.099 "superblock": false, 00:14:38.099 "num_base_bdevs": 4, 00:14:38.099 "num_base_bdevs_discovered": 3, 00:14:38.099 "num_base_bdevs_operational": 3, 00:14:38.099 "base_bdevs_list": [ 00:14:38.099 { 00:14:38.099 "name": null, 00:14:38.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.099 "is_configured": false, 00:14:38.099 "data_offset": 0, 00:14:38.099 "data_size": 65536 00:14:38.099 }, 00:14:38.099 { 00:14:38.099 "name": "BaseBdev2", 00:14:38.099 "uuid": "3d838621-3278-4480-ae2a-95a92b1a739a", 00:14:38.099 "is_configured": true, 00:14:38.099 "data_offset": 0, 00:14:38.099 "data_size": 65536 00:14:38.099 }, 00:14:38.099 { 00:14:38.099 "name": "BaseBdev3", 00:14:38.099 "uuid": "ac989ba5-d505-4ee8-951b-e5ffe2f25564", 00:14:38.100 "is_configured": true, 00:14:38.100 "data_offset": 0, 00:14:38.100 "data_size": 65536 00:14:38.100 }, 00:14:38.100 { 00:14:38.100 "name": "BaseBdev4", 00:14:38.100 "uuid": "a634b38e-7936-475d-bb4d-97ac81197273", 00:14:38.100 
"is_configured": true, 00:14:38.100 "data_offset": 0, 00:14:38.100 "data_size": 65536 00:14:38.100 } 00:14:38.100 ] 00:14:38.100 }' 00:14:38.100 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.100 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.360 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.360 [2024-11-26 22:59:17.485963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.360 [2024-11-26 22:59:17.486107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.621 [2024-11-26 22:59:17.497481] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 [2024-11-26 22:59:17.549515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.621 22:59:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 [2024-11-26 22:59:17.620493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:38.621 [2024-11-26 22:59:17.620541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:38.621 22:59:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 BaseBdev2 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.621 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.621 [ 00:14:38.621 { 00:14:38.621 "name": "BaseBdev2", 00:14:38.621 "aliases": [ 00:14:38.621 "06c3e3e8-7812-4791-8f1e-43e2820da2cb" 00:14:38.621 ], 00:14:38.621 "product_name": "Malloc disk", 00:14:38.621 "block_size": 512, 00:14:38.621 "num_blocks": 65536, 00:14:38.621 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:38.621 "assigned_rate_limits": { 00:14:38.621 "rw_ios_per_sec": 0, 00:14:38.621 "rw_mbytes_per_sec": 0, 00:14:38.621 "r_mbytes_per_sec": 0, 00:14:38.621 "w_mbytes_per_sec": 0 00:14:38.621 }, 00:14:38.621 "claimed": false, 00:14:38.621 "zoned": false, 00:14:38.621 "supported_io_types": { 00:14:38.622 "read": true, 00:14:38.622 "write": true, 00:14:38.622 "unmap": true, 00:14:38.622 "flush": true, 00:14:38.622 "reset": true, 00:14:38.622 "nvme_admin": false, 00:14:38.622 "nvme_io": false, 00:14:38.622 "nvme_io_md": false, 00:14:38.622 "write_zeroes": true, 00:14:38.622 "zcopy": true, 00:14:38.622 "get_zone_info": false, 00:14:38.622 "zone_management": false, 00:14:38.622 "zone_append": false, 00:14:38.622 "compare": false, 00:14:38.622 "compare_and_write": false, 00:14:38.622 "abort": true, 00:14:38.622 "seek_hole": false, 00:14:38.622 
"seek_data": false, 00:14:38.622 "copy": true, 00:14:38.622 "nvme_iov_md": false 00:14:38.622 }, 00:14:38.622 "memory_domains": [ 00:14:38.622 { 00:14:38.622 "dma_device_id": "system", 00:14:38.622 "dma_device_type": 1 00:14:38.622 }, 00:14:38.622 { 00:14:38.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.622 "dma_device_type": 2 00:14:38.622 } 00:14:38.622 ], 00:14:38.622 "driver_specific": {} 00:14:38.622 } 00:14:38.622 ] 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.622 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.882 BaseBdev3 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.882 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.882 [ 00:14:38.882 { 00:14:38.882 "name": "BaseBdev3", 00:14:38.882 "aliases": [ 00:14:38.883 "cf0df2b9-c03b-4510-a656-6056396545b6" 00:14:38.883 ], 00:14:38.883 "product_name": "Malloc disk", 00:14:38.883 "block_size": 512, 00:14:38.883 "num_blocks": 65536, 00:14:38.883 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:38.883 "assigned_rate_limits": { 00:14:38.883 "rw_ios_per_sec": 0, 00:14:38.883 "rw_mbytes_per_sec": 0, 00:14:38.883 "r_mbytes_per_sec": 0, 00:14:38.883 "w_mbytes_per_sec": 0 00:14:38.883 }, 00:14:38.883 "claimed": false, 00:14:38.883 "zoned": false, 00:14:38.883 "supported_io_types": { 00:14:38.883 "read": true, 00:14:38.883 "write": true, 00:14:38.883 "unmap": true, 00:14:38.883 "flush": true, 00:14:38.883 "reset": true, 00:14:38.883 "nvme_admin": false, 00:14:38.883 "nvme_io": false, 00:14:38.883 "nvme_io_md": false, 00:14:38.883 "write_zeroes": true, 00:14:38.883 "zcopy": true, 00:14:38.883 "get_zone_info": false, 00:14:38.883 "zone_management": false, 00:14:38.883 "zone_append": false, 00:14:38.883 "compare": false, 00:14:38.883 "compare_and_write": false, 00:14:38.883 "abort": true, 
00:14:38.883 "seek_hole": false, 00:14:38.883 "seek_data": false, 00:14:38.883 "copy": true, 00:14:38.883 "nvme_iov_md": false 00:14:38.883 }, 00:14:38.883 "memory_domains": [ 00:14:38.883 { 00:14:38.883 "dma_device_id": "system", 00:14:38.883 "dma_device_type": 1 00:14:38.883 }, 00:14:38.883 { 00:14:38.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.883 "dma_device_type": 2 00:14:38.883 } 00:14:38.883 ], 00:14:38.883 "driver_specific": {} 00:14:38.883 } 00:14:38.883 ] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 BaseBdev4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.883 22:59:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 [ 00:14:38.883 { 00:14:38.883 "name": "BaseBdev4", 00:14:38.883 "aliases": [ 00:14:38.883 "b437d21e-ed6c-45e2-94d4-354049baaaf8" 00:14:38.883 ], 00:14:38.883 "product_name": "Malloc disk", 00:14:38.883 "block_size": 512, 00:14:38.883 "num_blocks": 65536, 00:14:38.883 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:38.883 "assigned_rate_limits": { 00:14:38.883 "rw_ios_per_sec": 0, 00:14:38.883 "rw_mbytes_per_sec": 0, 00:14:38.883 "r_mbytes_per_sec": 0, 00:14:38.883 "w_mbytes_per_sec": 0 00:14:38.883 }, 00:14:38.883 "claimed": false, 00:14:38.883 "zoned": false, 00:14:38.883 "supported_io_types": { 00:14:38.883 "read": true, 00:14:38.883 "write": true, 00:14:38.883 "unmap": true, 00:14:38.883 "flush": true, 00:14:38.883 "reset": true, 00:14:38.883 "nvme_admin": false, 00:14:38.883 "nvme_io": false, 00:14:38.883 "nvme_io_md": false, 00:14:38.883 "write_zeroes": true, 00:14:38.883 "zcopy": true, 00:14:38.883 "get_zone_info": false, 00:14:38.883 "zone_management": false, 00:14:38.883 "zone_append": false, 00:14:38.883 "compare": false, 00:14:38.883 
"compare_and_write": false, 00:14:38.883 "abort": true, 00:14:38.883 "seek_hole": false, 00:14:38.883 "seek_data": false, 00:14:38.883 "copy": true, 00:14:38.883 "nvme_iov_md": false 00:14:38.883 }, 00:14:38.883 "memory_domains": [ 00:14:38.883 { 00:14:38.883 "dma_device_id": "system", 00:14:38.883 "dma_device_type": 1 00:14:38.883 }, 00:14:38.883 { 00:14:38.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.883 "dma_device_type": 2 00:14:38.883 } 00:14:38.883 ], 00:14:38.883 "driver_specific": {} 00:14:38.883 } 00:14:38.883 ] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 [2024-11-26 22:59:17.847625] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.883 [2024-11-26 22:59:17.847685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.883 [2024-11-26 22:59:17.847705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.883 [2024-11-26 22:59:17.849426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.883 [2024-11-26 22:59:17.849558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.883 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:38.883 "name": "Existed_Raid", 00:14:38.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.884 "strip_size_kb": 64, 00:14:38.884 "state": "configuring", 00:14:38.884 "raid_level": "raid5f", 00:14:38.884 "superblock": false, 00:14:38.884 "num_base_bdevs": 4, 00:14:38.884 "num_base_bdevs_discovered": 3, 00:14:38.884 "num_base_bdevs_operational": 4, 00:14:38.884 "base_bdevs_list": [ 00:14:38.884 { 00:14:38.884 "name": "BaseBdev1", 00:14:38.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.884 "is_configured": false, 00:14:38.884 "data_offset": 0, 00:14:38.884 "data_size": 0 00:14:38.884 }, 00:14:38.884 { 00:14:38.884 "name": "BaseBdev2", 00:14:38.884 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:38.884 "is_configured": true, 00:14:38.884 "data_offset": 0, 00:14:38.884 "data_size": 65536 00:14:38.884 }, 00:14:38.884 { 00:14:38.884 "name": "BaseBdev3", 00:14:38.884 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:38.884 "is_configured": true, 00:14:38.884 "data_offset": 0, 00:14:38.884 "data_size": 65536 00:14:38.884 }, 00:14:38.884 { 00:14:38.884 "name": "BaseBdev4", 00:14:38.884 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:38.884 "is_configured": true, 00:14:38.884 "data_offset": 0, 00:14:38.884 "data_size": 65536 00:14:38.884 } 00:14:38.884 ] 00:14:38.884 }' 00:14:38.884 22:59:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.884 22:59:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.455 [2024-11-26 22:59:18.303704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.455 "name": 
"Existed_Raid", 00:14:39.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.455 "strip_size_kb": 64, 00:14:39.455 "state": "configuring", 00:14:39.455 "raid_level": "raid5f", 00:14:39.455 "superblock": false, 00:14:39.455 "num_base_bdevs": 4, 00:14:39.455 "num_base_bdevs_discovered": 2, 00:14:39.455 "num_base_bdevs_operational": 4, 00:14:39.455 "base_bdevs_list": [ 00:14:39.455 { 00:14:39.455 "name": "BaseBdev1", 00:14:39.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.455 "is_configured": false, 00:14:39.455 "data_offset": 0, 00:14:39.455 "data_size": 0 00:14:39.455 }, 00:14:39.455 { 00:14:39.455 "name": null, 00:14:39.455 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:39.455 "is_configured": false, 00:14:39.455 "data_offset": 0, 00:14:39.455 "data_size": 65536 00:14:39.455 }, 00:14:39.455 { 00:14:39.455 "name": "BaseBdev3", 00:14:39.455 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:39.455 "is_configured": true, 00:14:39.455 "data_offset": 0, 00:14:39.455 "data_size": 65536 00:14:39.455 }, 00:14:39.455 { 00:14:39.455 "name": "BaseBdev4", 00:14:39.455 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:39.455 "is_configured": true, 00:14:39.455 "data_offset": 0, 00:14:39.455 "data_size": 65536 00:14:39.455 } 00:14:39.455 ] 00:14:39.455 }' 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.455 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.716 22:59:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.716 [2024-11-26 22:59:18.786786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.716 BaseBdev1 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.716 22:59:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.716 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.716 [ 00:14:39.716 { 00:14:39.716 "name": "BaseBdev1", 00:14:39.716 "aliases": [ 00:14:39.716 "f67a8089-cd54-4b19-89f3-537459ecb3a2" 00:14:39.716 ], 00:14:39.716 "product_name": "Malloc disk", 00:14:39.716 "block_size": 512, 00:14:39.716 "num_blocks": 65536, 00:14:39.716 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:39.716 "assigned_rate_limits": { 00:14:39.716 "rw_ios_per_sec": 0, 00:14:39.716 "rw_mbytes_per_sec": 0, 00:14:39.716 "r_mbytes_per_sec": 0, 00:14:39.716 "w_mbytes_per_sec": 0 00:14:39.716 }, 00:14:39.716 "claimed": true, 00:14:39.716 "claim_type": "exclusive_write", 00:14:39.716 "zoned": false, 00:14:39.716 "supported_io_types": { 00:14:39.716 "read": true, 00:14:39.716 "write": true, 00:14:39.716 "unmap": true, 00:14:39.716 "flush": true, 00:14:39.716 "reset": true, 00:14:39.716 "nvme_admin": false, 00:14:39.716 "nvme_io": false, 00:14:39.716 "nvme_io_md": false, 00:14:39.716 "write_zeroes": true, 00:14:39.716 "zcopy": true, 00:14:39.716 "get_zone_info": false, 00:14:39.716 "zone_management": false, 00:14:39.716 "zone_append": false, 00:14:39.716 "compare": false, 00:14:39.717 "compare_and_write": false, 00:14:39.717 "abort": true, 00:14:39.717 "seek_hole": false, 00:14:39.717 "seek_data": false, 00:14:39.717 "copy": true, 00:14:39.717 "nvme_iov_md": false 00:14:39.717 }, 00:14:39.717 "memory_domains": [ 00:14:39.717 { 00:14:39.717 "dma_device_id": "system", 00:14:39.717 "dma_device_type": 1 00:14:39.717 }, 00:14:39.717 { 00:14:39.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.717 "dma_device_type": 2 00:14:39.717 } 00:14:39.717 ], 00:14:39.717 "driver_specific": {} 00:14:39.717 } 00:14:39.717 ] 
00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.717 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.977 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.977 22:59:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.977 "name": "Existed_Raid", 00:14:39.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.977 "strip_size_kb": 64, 00:14:39.977 "state": "configuring", 00:14:39.977 "raid_level": "raid5f", 00:14:39.977 "superblock": false, 00:14:39.977 "num_base_bdevs": 4, 00:14:39.977 "num_base_bdevs_discovered": 3, 00:14:39.977 "num_base_bdevs_operational": 4, 00:14:39.977 "base_bdevs_list": [ 00:14:39.977 { 00:14:39.977 "name": "BaseBdev1", 00:14:39.977 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:39.977 "is_configured": true, 00:14:39.977 "data_offset": 0, 00:14:39.977 "data_size": 65536 00:14:39.977 }, 00:14:39.977 { 00:14:39.977 "name": null, 00:14:39.977 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:39.977 "is_configured": false, 00:14:39.977 "data_offset": 0, 00:14:39.977 "data_size": 65536 00:14:39.977 }, 00:14:39.977 { 00:14:39.977 "name": "BaseBdev3", 00:14:39.977 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:39.977 "is_configured": true, 00:14:39.977 "data_offset": 0, 00:14:39.977 "data_size": 65536 00:14:39.977 }, 00:14:39.977 { 00:14:39.977 "name": "BaseBdev4", 00:14:39.977 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:39.977 "is_configured": true, 00:14:39.977 "data_offset": 0, 00:14:39.977 "data_size": 65536 00:14:39.977 } 00:14:39.977 ] 00:14:39.977 }' 00:14:39.977 22:59:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.977 22:59:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.258 
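The dump above shows `verify_raid_bdev_state` asserting that `Existed_Raid` stays in `configuring` while only 3 of 4 base bdevs are discovered. A minimal sketch of that comparison logic, with the values hard-coded from the JSON dump above (the real harness pulls them live via `rpc_cmd bdev_raid_get_bdevs all` piped through jq):

```shell
# Sketch of the verify_raid_bdev_state check; values are copied from the
# dump above rather than fetched from a live rpc_cmd call.
expected_state=configuring

state=configuring
num_base_bdevs_discovered=3
num_base_bdevs_operational=4

# A raid5f bdev is held in "configuring" until every operational slot
# has a discovered base bdev.
if [[ $state == "$expected_state" ]] &&
   (( num_base_bdevs_discovered < num_base_bdevs_operational )); then
  echo "Existed_Raid held in configuring (3/4 base bdevs)"
fi
```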
22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.258 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.259 [2024-11-26 22:59:19.306965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.259 22:59:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.259 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.259 "name": "Existed_Raid", 00:14:40.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.259 "strip_size_kb": 64, 00:14:40.259 "state": "configuring", 00:14:40.259 "raid_level": "raid5f", 00:14:40.259 "superblock": false, 00:14:40.259 "num_base_bdevs": 4, 00:14:40.259 "num_base_bdevs_discovered": 2, 00:14:40.259 "num_base_bdevs_operational": 4, 00:14:40.259 "base_bdevs_list": [ 00:14:40.259 { 00:14:40.259 "name": "BaseBdev1", 00:14:40.259 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:40.259 "is_configured": true, 00:14:40.259 "data_offset": 0, 00:14:40.259 "data_size": 65536 00:14:40.259 }, 00:14:40.259 { 00:14:40.259 "name": null, 00:14:40.259 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:40.259 "is_configured": false, 00:14:40.260 "data_offset": 0, 00:14:40.260 "data_size": 65536 00:14:40.260 }, 00:14:40.260 { 00:14:40.260 "name": null, 00:14:40.260 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:40.260 "is_configured": false, 00:14:40.260 "data_offset": 0, 00:14:40.260 "data_size": 65536 00:14:40.260 }, 00:14:40.260 { 00:14:40.260 "name": "BaseBdev4", 00:14:40.260 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:40.260 "is_configured": true, 00:14:40.260 
"data_offset": 0, 00:14:40.260 "data_size": 65536 00:14:40.260 } 00:14:40.260 ] 00:14:40.260 }' 00:14:40.260 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.260 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.831 [2024-11-26 22:59:19.859145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.831 
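The remove/re-add cycle above (`bdev_raid_remove_base_bdev BaseBdev3`, then `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`) shows that removal nulls the slot's `name` and clears `is_configured`, while the slot's `uuid` is retained so the bdev can be reclaimed. A hypothetical in-shell simulation of that slot behavior, using the uuid from the dumps above:

```shell
# Simulation (not the SPDK implementation) of base bdev slot state across
# the remove/re-add cycle seen in the dumps above.
declare -A slot2=(
  [name]="BaseBdev3"
  [uuid]="cf0df2b9-c03b-4510-a656-6056396545b6"
  [is_configured]=true
)

# bdev_raid_remove_base_bdev BaseBdev3: name nulled, uuid kept
slot2[name]=null
slot2[is_configured]=false

# bdev_raid_add_base_bdev Existed_Raid BaseBdev3: slot reclaimed by uuid
slot2[name]="BaseBdev3"
slot2[is_configured]=true

echo "slot2: ${slot2[name]} configured=${slot2[is_configured]}"
```

The retained uuid is what lets the raid module match the re-added bdev back to its original position in `base_bdevs_list`.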
22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.831 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.832 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.832 "name": "Existed_Raid", 00:14:40.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.832 "strip_size_kb": 64, 00:14:40.832 "state": "configuring", 00:14:40.832 "raid_level": "raid5f", 00:14:40.832 "superblock": false, 00:14:40.832 "num_base_bdevs": 4, 00:14:40.832 "num_base_bdevs_discovered": 3, 00:14:40.832 "num_base_bdevs_operational": 4, 00:14:40.832 "base_bdevs_list": [ 00:14:40.832 { 00:14:40.832 "name": "BaseBdev1", 00:14:40.832 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:40.832 "is_configured": 
true, 00:14:40.832 "data_offset": 0, 00:14:40.832 "data_size": 65536 00:14:40.832 }, 00:14:40.832 { 00:14:40.832 "name": null, 00:14:40.832 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:40.832 "is_configured": false, 00:14:40.832 "data_offset": 0, 00:14:40.832 "data_size": 65536 00:14:40.832 }, 00:14:40.832 { 00:14:40.832 "name": "BaseBdev3", 00:14:40.832 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:40.832 "is_configured": true, 00:14:40.832 "data_offset": 0, 00:14:40.832 "data_size": 65536 00:14:40.832 }, 00:14:40.832 { 00:14:40.832 "name": "BaseBdev4", 00:14:40.832 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:40.832 "is_configured": true, 00:14:40.832 "data_offset": 0, 00:14:40.832 "data_size": 65536 00:14:40.832 } 00:14:40.832 ] 00:14:40.832 }' 00:14:40.832 22:59:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.832 22:59:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.401 [2024-11-26 22:59:20.395301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.401 "name": "Existed_Raid", 00:14:41.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.401 "strip_size_kb": 64, 00:14:41.401 "state": "configuring", 00:14:41.401 "raid_level": "raid5f", 00:14:41.401 "superblock": false, 00:14:41.401 "num_base_bdevs": 4, 00:14:41.401 "num_base_bdevs_discovered": 2, 00:14:41.401 "num_base_bdevs_operational": 4, 00:14:41.401 "base_bdevs_list": [ 00:14:41.401 { 00:14:41.401 "name": null, 00:14:41.401 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:41.401 "is_configured": false, 00:14:41.401 "data_offset": 0, 00:14:41.401 "data_size": 65536 00:14:41.401 }, 00:14:41.401 { 00:14:41.401 "name": null, 00:14:41.401 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:41.401 "is_configured": false, 00:14:41.401 "data_offset": 0, 00:14:41.401 "data_size": 65536 00:14:41.401 }, 00:14:41.401 { 00:14:41.401 "name": "BaseBdev3", 00:14:41.401 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:41.401 "is_configured": true, 00:14:41.401 "data_offset": 0, 00:14:41.401 "data_size": 65536 00:14:41.401 }, 00:14:41.401 { 00:14:41.401 "name": "BaseBdev4", 00:14:41.401 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:41.401 "is_configured": true, 00:14:41.401 "data_offset": 0, 00:14:41.401 "data_size": 65536 00:14:41.401 } 00:14:41.401 ] 00:14:41.401 }' 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.401 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- 
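After `bdev_malloc_delete BaseBdev1`, the dump above shows two null-name slots: `num_base_bdevs_discovered` drops to 2 while `num_base_bdevs_operational` stays 4, so the raid remains `configuring`. A small sketch of how that discovered count falls out of the slot names (values hard-coded from the dump above; the harness computes this via jq):

```shell
# Count discovered base bdevs from the slot names in the dump above.
# Slots whose bdev was removed report "name": null.
slots=(null null BaseBdev3 BaseBdev4)

discovered=0
for name in "${slots[@]}"; do
  [[ $name != null ]] && (( ++discovered ))
done

echo "num_base_bdevs_discovered=$discovered"   # 2 of 4 -> still configuring
```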
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.971 [2024-11-26 22:59:20.941996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.971 "name": "Existed_Raid", 00:14:41.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.971 "strip_size_kb": 64, 00:14:41.971 "state": "configuring", 00:14:41.971 "raid_level": "raid5f", 00:14:41.971 "superblock": false, 00:14:41.971 "num_base_bdevs": 4, 00:14:41.971 "num_base_bdevs_discovered": 3, 00:14:41.971 "num_base_bdevs_operational": 4, 00:14:41.971 "base_bdevs_list": [ 00:14:41.971 { 00:14:41.971 "name": null, 00:14:41.971 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:41.971 "is_configured": false, 00:14:41.971 "data_offset": 0, 00:14:41.971 "data_size": 65536 00:14:41.971 }, 00:14:41.971 { 00:14:41.971 "name": "BaseBdev2", 00:14:41.971 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:41.971 "is_configured": true, 00:14:41.971 "data_offset": 0, 00:14:41.971 "data_size": 65536 00:14:41.971 }, 00:14:41.971 { 00:14:41.971 "name": "BaseBdev3", 00:14:41.971 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:41.971 "is_configured": true, 00:14:41.971 "data_offset": 0, 00:14:41.971 "data_size": 65536 00:14:41.971 }, 00:14:41.971 { 00:14:41.971 "name": "BaseBdev4", 00:14:41.971 "uuid": 
"b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:41.971 "is_configured": true, 00:14:41.971 "data_offset": 0, 00:14:41.971 "data_size": 65536 00:14:41.971 } 00:14:41.971 ] 00:14:41.971 }' 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.971 22:59:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f67a8089-cd54-4b19-89f3-537459ecb3a2 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.541 22:59:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 [2024-11-26 22:59:21.496942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:42.541 [2024-11-26 22:59:21.497038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:42.541 [2024-11-26 22:59:21.497064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:42.541 [2024-11-26 22:59:21.497346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:42.541 [2024-11-26 22:59:21.497817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.541 [2024-11-26 22:59:21.497863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:42.541 [2024-11-26 22:59:21.498067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.541 NewBaseBdev 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- 
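The step above recreates the missing base bdev with `bdev_malloc_create ... -b NewBaseBdev -u f67a8089-...`: because the new malloc bdev carries the uuid remembered in the empty slot, the raid module claims it immediately and, with all four slots filled, transitions the array from `configuring` to `online`. A sketch of that transition rule, with counts taken from the surrounding dumps:

```shell
# Sketch of the configuring -> online transition asserted by the test:
# once the fourth slot is filled, discovered equals operational.
num_base_bdevs_operational=4
num_base_bdevs_discovered=4   # NewBaseBdev reclaimed the empty slot

if (( num_base_bdevs_discovered == num_base_bdevs_operational )); then
  state=online
else
  state=configuring
fi
echo "state=$state"
```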
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.541 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 [ 00:14:42.541 { 00:14:42.541 "name": "NewBaseBdev", 00:14:42.541 "aliases": [ 00:14:42.542 "f67a8089-cd54-4b19-89f3-537459ecb3a2" 00:14:42.542 ], 00:14:42.542 "product_name": "Malloc disk", 00:14:42.542 "block_size": 512, 00:14:42.542 "num_blocks": 65536, 00:14:42.542 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:42.542 "assigned_rate_limits": { 00:14:42.542 "rw_ios_per_sec": 0, 00:14:42.542 "rw_mbytes_per_sec": 0, 00:14:42.542 "r_mbytes_per_sec": 0, 00:14:42.542 "w_mbytes_per_sec": 0 00:14:42.542 }, 00:14:42.542 "claimed": true, 00:14:42.542 "claim_type": "exclusive_write", 00:14:42.542 "zoned": false, 00:14:42.542 "supported_io_types": { 00:14:42.542 "read": true, 00:14:42.542 "write": true, 00:14:42.542 "unmap": true, 00:14:42.542 "flush": true, 00:14:42.542 "reset": true, 00:14:42.542 "nvme_admin": false, 00:14:42.542 "nvme_io": false, 00:14:42.542 "nvme_io_md": false, 00:14:42.542 "write_zeroes": true, 00:14:42.542 "zcopy": true, 00:14:42.542 "get_zone_info": false, 00:14:42.542 "zone_management": false, 00:14:42.542 "zone_append": false, 00:14:42.542 "compare": false, 00:14:42.542 "compare_and_write": false, 00:14:42.542 "abort": true, 00:14:42.542 "seek_hole": false, 00:14:42.542 "seek_data": false, 00:14:42.542 "copy": true, 00:14:42.542 "nvme_iov_md": false 00:14:42.542 }, 00:14:42.542 "memory_domains": [ 00:14:42.542 { 
00:14:42.542 "dma_device_id": "system", 00:14:42.542 "dma_device_type": 1 00:14:42.542 }, 00:14:42.542 { 00:14:42.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.542 "dma_device_type": 2 00:14:42.542 } 00:14:42.542 ], 00:14:42.542 "driver_specific": {} 00:14:42.542 } 00:14:42.542 ] 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.542 "name": "Existed_Raid", 00:14:42.542 "uuid": "bec460f4-8f74-4221-8512-ea92202a84c8", 00:14:42.542 "strip_size_kb": 64, 00:14:42.542 "state": "online", 00:14:42.542 "raid_level": "raid5f", 00:14:42.542 "superblock": false, 00:14:42.542 "num_base_bdevs": 4, 00:14:42.542 "num_base_bdevs_discovered": 4, 00:14:42.542 "num_base_bdevs_operational": 4, 00:14:42.542 "base_bdevs_list": [ 00:14:42.542 { 00:14:42.542 "name": "NewBaseBdev", 00:14:42.542 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:42.542 "is_configured": true, 00:14:42.542 "data_offset": 0, 00:14:42.542 "data_size": 65536 00:14:42.542 }, 00:14:42.542 { 00:14:42.542 "name": "BaseBdev2", 00:14:42.542 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:42.542 "is_configured": true, 00:14:42.542 "data_offset": 0, 00:14:42.542 "data_size": 65536 00:14:42.542 }, 00:14:42.542 { 00:14:42.542 "name": "BaseBdev3", 00:14:42.542 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:42.542 "is_configured": true, 00:14:42.542 "data_offset": 0, 00:14:42.542 "data_size": 65536 00:14:42.542 }, 00:14:42.542 { 00:14:42.542 "name": "BaseBdev4", 00:14:42.542 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:42.542 "is_configured": true, 00:14:42.542 "data_offset": 0, 00:14:42.542 "data_size": 65536 00:14:42.542 } 00:14:42.542 ] 00:14:42.542 }' 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.542 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 [2024-11-26 22:59:21.953271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.112 "name": "Existed_Raid", 00:14:43.112 "aliases": [ 00:14:43.112 "bec460f4-8f74-4221-8512-ea92202a84c8" 00:14:43.112 ], 00:14:43.112 "product_name": "Raid Volume", 00:14:43.112 "block_size": 512, 00:14:43.112 "num_blocks": 196608, 00:14:43.112 "uuid": "bec460f4-8f74-4221-8512-ea92202a84c8", 00:14:43.112 "assigned_rate_limits": { 00:14:43.112 "rw_ios_per_sec": 0, 00:14:43.112 "rw_mbytes_per_sec": 0, 00:14:43.112 "r_mbytes_per_sec": 0, 00:14:43.112 "w_mbytes_per_sec": 0 00:14:43.112 }, 00:14:43.112 "claimed": false, 00:14:43.112 "zoned": false, 00:14:43.112 "supported_io_types": { 00:14:43.112 
"read": true, 00:14:43.112 "write": true, 00:14:43.112 "unmap": false, 00:14:43.112 "flush": false, 00:14:43.112 "reset": true, 00:14:43.112 "nvme_admin": false, 00:14:43.112 "nvme_io": false, 00:14:43.112 "nvme_io_md": false, 00:14:43.112 "write_zeroes": true, 00:14:43.112 "zcopy": false, 00:14:43.112 "get_zone_info": false, 00:14:43.112 "zone_management": false, 00:14:43.112 "zone_append": false, 00:14:43.112 "compare": false, 00:14:43.112 "compare_and_write": false, 00:14:43.112 "abort": false, 00:14:43.112 "seek_hole": false, 00:14:43.112 "seek_data": false, 00:14:43.112 "copy": false, 00:14:43.112 "nvme_iov_md": false 00:14:43.112 }, 00:14:43.112 "driver_specific": { 00:14:43.112 "raid": { 00:14:43.112 "uuid": "bec460f4-8f74-4221-8512-ea92202a84c8", 00:14:43.112 "strip_size_kb": 64, 00:14:43.112 "state": "online", 00:14:43.112 "raid_level": "raid5f", 00:14:43.112 "superblock": false, 00:14:43.112 "num_base_bdevs": 4, 00:14:43.112 "num_base_bdevs_discovered": 4, 00:14:43.112 "num_base_bdevs_operational": 4, 00:14:43.112 "base_bdevs_list": [ 00:14:43.112 { 00:14:43.112 "name": "NewBaseBdev", 00:14:43.112 "uuid": "f67a8089-cd54-4b19-89f3-537459ecb3a2", 00:14:43.112 "is_configured": true, 00:14:43.112 "data_offset": 0, 00:14:43.112 "data_size": 65536 00:14:43.112 }, 00:14:43.112 { 00:14:43.112 "name": "BaseBdev2", 00:14:43.112 "uuid": "06c3e3e8-7812-4791-8f1e-43e2820da2cb", 00:14:43.112 "is_configured": true, 00:14:43.112 "data_offset": 0, 00:14:43.112 "data_size": 65536 00:14:43.112 }, 00:14:43.112 { 00:14:43.112 "name": "BaseBdev3", 00:14:43.112 "uuid": "cf0df2b9-c03b-4510-a656-6056396545b6", 00:14:43.112 "is_configured": true, 00:14:43.112 "data_offset": 0, 00:14:43.112 "data_size": 65536 00:14:43.112 }, 00:14:43.112 { 00:14:43.112 "name": "BaseBdev4", 00:14:43.112 "uuid": "b437d21e-ed6c-45e2-94d4-354049baaaf8", 00:14:43.112 "is_configured": true, 00:14:43.112 "data_offset": 0, 00:14:43.112 "data_size": 65536 00:14:43.112 } 00:14:43.112 ] 00:14:43.112 } 
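The Raid Volume dump above feeds `verify_raid_bdev_properties`, which joins `[.block_size, .md_size, .md_interleave, .dif_type]` with spaces for the raid volume and for each configured base bdev, then compares the strings (hence the `[[ 512 == \5\1\2\ \ \ ]]` checks that follow). A sketch of that comparison with the values seen here, where only `block_size` is set and the other three fields join as empty:

```shell
# Sketch of the property comparison: jq's join(" ") over
# [.block_size, .md_size, .md_interleave, .dif_type] yields "512" followed
# by three spaces when the last three fields are null/empty.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '

if [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]; then
  echo "base bdev matches Existed_Raid metadata layout"
fi
```

Comparing the joined string rather than each field individually lets one `[[ ]]` test cover block size, metadata size, interleave mode, and DIF type at once.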
00:14:43.112 } 00:14:43.112 }' 00:14:43.112 22:59:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:43.112 BaseBdev2 00:14:43.112 BaseBdev3 00:14:43.112 BaseBdev4' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:43.112 22:59:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.372 [2024-11-26 22:59:22.281171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.372 [2024-11-26 22:59:22.281197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.372 [2024-11-26 22:59:22.281271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.372 [2024-11-26 22:59:22.281520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.372 [2024-11-26 22:59:22.281540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94882 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 94882 ']' 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 94882 
00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94882 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94882' 00:14:43.372 killing process with pid 94882 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 94882 00:14:43.372 [2024-11-26 22:59:22.317532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.372 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 94882 00:14:43.372 [2024-11-26 22:59:22.357845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.633 ************************************ 00:14:43.633 END TEST raid5f_state_function_test 00:14:43.633 ************************************ 00:14:43.633 00:14:43.633 real 0m9.669s 00:14:43.633 user 0m16.498s 00:14:43.633 sys 0m2.166s 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.633 22:59:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:43.633 22:59:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.633 
22:59:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.633 22:59:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.633 ************************************ 00:14:43.633 START TEST raid5f_state_function_test_sb 00:14:43.633 ************************************ 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.633 
22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95537 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95537' 00:14:43.633 Process raid pid: 95537 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95537 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95537 ']' 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.633 22:59:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.893 [2024-11-26 22:59:22.775245] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:43.893 [2024-11-26 22:59:22.775485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.893 [2024-11-26 22:59:22.916840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:43.893 [2024-11-26 22:59:22.954902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.893 [2024-11-26 22:59:22.981719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.153 [2024-11-26 22:59:23.025296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.153 [2024-11-26 22:59:23.025409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.723 [2024-11-26 22:59:23.589733] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.723 [2024-11-26 22:59:23.589837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.723 [2024-11-26 22:59:23.589866] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.723 [2024-11-26 22:59:23.589886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.723 [2024-11-26 22:59:23.589906] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.723 [2024-11-26 22:59:23.589924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.723 [2024-11-26 22:59:23.589941] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:44.723 [2024-11-26 22:59:23.589958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.723 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.724 22:59:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.724 "name": "Existed_Raid", 00:14:44.724 "uuid": "aa0104bd-e3af-4377-a8dd-199785c371bd", 00:14:44.724 "strip_size_kb": 64, 00:14:44.724 "state": "configuring", 00:14:44.724 "raid_level": "raid5f", 00:14:44.724 "superblock": true, 00:14:44.724 "num_base_bdevs": 4, 00:14:44.724 "num_base_bdevs_discovered": 0, 00:14:44.724 "num_base_bdevs_operational": 4, 00:14:44.724 "base_bdevs_list": [ 00:14:44.724 { 00:14:44.724 "name": "BaseBdev1", 00:14:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.724 "is_configured": false, 00:14:44.724 "data_offset": 0, 00:14:44.724 "data_size": 0 00:14:44.724 }, 00:14:44.724 { 00:14:44.724 "name": "BaseBdev2", 00:14:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.724 "is_configured": false, 00:14:44.724 "data_offset": 0, 00:14:44.724 "data_size": 0 00:14:44.724 }, 00:14:44.724 { 00:14:44.724 "name": "BaseBdev3", 00:14:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.724 "is_configured": false, 00:14:44.724 "data_offset": 0, 00:14:44.724 "data_size": 0 00:14:44.724 }, 00:14:44.724 { 00:14:44.724 "name": "BaseBdev4", 00:14:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.724 "is_configured": false, 00:14:44.724 "data_offset": 0, 00:14:44.724 "data_size": 0 00:14:44.724 } 00:14:44.724 ] 00:14:44.724 }' 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.724 22:59:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.984 22:59:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.984 [2024-11-26 22:59:24.077739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.984 [2024-11-26 22:59:24.077813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.984 [2024-11-26 22:59:24.089780] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.984 [2024-11-26 22:59:24.089854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.984 [2024-11-26 22:59:24.089867] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.984 [2024-11-26 22:59:24.089890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.984 [2024-11-26 22:59:24.089898] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.984 [2024-11-26 22:59:24.089905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.984 [2024-11-26 22:59:24.089912] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:44.984 [2024-11-26 22:59:24.089918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.984 22:59:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.984 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.244 [2024-11-26 22:59:24.110988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.244 BaseBdev1 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.244 [ 00:14:45.244 { 00:14:45.244 "name": "BaseBdev1", 00:14:45.244 "aliases": [ 00:14:45.244 "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1" 00:14:45.244 ], 00:14:45.244 "product_name": "Malloc disk", 00:14:45.244 "block_size": 512, 00:14:45.244 "num_blocks": 65536, 00:14:45.244 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:45.244 "assigned_rate_limits": { 00:14:45.244 "rw_ios_per_sec": 0, 00:14:45.244 "rw_mbytes_per_sec": 0, 00:14:45.244 "r_mbytes_per_sec": 0, 00:14:45.244 "w_mbytes_per_sec": 0 00:14:45.244 }, 00:14:45.244 "claimed": true, 00:14:45.244 "claim_type": "exclusive_write", 00:14:45.244 "zoned": false, 00:14:45.244 "supported_io_types": { 00:14:45.244 "read": true, 00:14:45.244 "write": true, 00:14:45.244 "unmap": true, 00:14:45.244 "flush": true, 00:14:45.244 "reset": true, 00:14:45.244 "nvme_admin": false, 00:14:45.244 "nvme_io": false, 00:14:45.244 "nvme_io_md": false, 00:14:45.244 "write_zeroes": true, 00:14:45.244 "zcopy": true, 00:14:45.244 "get_zone_info": false, 00:14:45.244 "zone_management": false, 00:14:45.244 "zone_append": false, 00:14:45.244 "compare": false, 00:14:45.244 "compare_and_write": false, 00:14:45.244 "abort": true, 00:14:45.244 "seek_hole": false, 00:14:45.244 "seek_data": false, 00:14:45.244 "copy": true, 00:14:45.244 "nvme_iov_md": false 00:14:45.244 }, 00:14:45.244 "memory_domains": [ 00:14:45.244 { 00:14:45.244 "dma_device_id": "system", 00:14:45.244 "dma_device_type": 1 00:14:45.244 }, 00:14:45.244 { 00:14:45.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.244 "dma_device_type": 2 00:14:45.244 } 00:14:45.244 ], 00:14:45.244 "driver_specific": {} 00:14:45.244 } 00:14:45.244 ] 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.244 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.245 "name": "Existed_Raid", 00:14:45.245 "uuid": "bf4f062a-0419-4b76-909a-a62d03b6dc84", 00:14:45.245 "strip_size_kb": 64, 00:14:45.245 "state": "configuring", 00:14:45.245 "raid_level": "raid5f", 00:14:45.245 "superblock": true, 00:14:45.245 "num_base_bdevs": 4, 00:14:45.245 "num_base_bdevs_discovered": 1, 00:14:45.245 "num_base_bdevs_operational": 4, 00:14:45.245 "base_bdevs_list": [ 00:14:45.245 { 00:14:45.245 "name": "BaseBdev1", 00:14:45.245 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:45.245 "is_configured": true, 00:14:45.245 "data_offset": 2048, 00:14:45.245 "data_size": 63488 00:14:45.245 }, 00:14:45.245 { 00:14:45.245 "name": "BaseBdev2", 00:14:45.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.245 "is_configured": false, 00:14:45.245 "data_offset": 0, 00:14:45.245 "data_size": 0 00:14:45.245 }, 00:14:45.245 { 00:14:45.245 "name": "BaseBdev3", 00:14:45.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.245 "is_configured": false, 00:14:45.245 "data_offset": 0, 00:14:45.245 "data_size": 0 00:14:45.245 }, 00:14:45.245 { 00:14:45.245 "name": "BaseBdev4", 00:14:45.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.245 "is_configured": false, 00:14:45.245 "data_offset": 0, 00:14:45.245 "data_size": 0 00:14:45.245 } 00:14:45.245 ] 00:14:45.245 }' 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.245 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.505 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.505 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.505 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.505 [2024-11-26 22:59:24.627142] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.505 [2024-11-26 22:59:24.627196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.765 [2024-11-26 22:59:24.639198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.765 [2024-11-26 22:59:24.640944] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.765 [2024-11-26 22:59:24.640985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.765 [2024-11-26 22:59:24.640995] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.765 [2024-11-26 22:59:24.641003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.765 [2024-11-26 22:59:24.641010] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.765 [2024-11-26 22:59:24.641016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.765 "name": "Existed_Raid", 00:14:45.765 "uuid": 
"7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:45.765 "strip_size_kb": 64, 00:14:45.765 "state": "configuring", 00:14:45.765 "raid_level": "raid5f", 00:14:45.765 "superblock": true, 00:14:45.765 "num_base_bdevs": 4, 00:14:45.765 "num_base_bdevs_discovered": 1, 00:14:45.765 "num_base_bdevs_operational": 4, 00:14:45.765 "base_bdevs_list": [ 00:14:45.765 { 00:14:45.765 "name": "BaseBdev1", 00:14:45.765 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:45.765 "is_configured": true, 00:14:45.765 "data_offset": 2048, 00:14:45.765 "data_size": 63488 00:14:45.765 }, 00:14:45.765 { 00:14:45.765 "name": "BaseBdev2", 00:14:45.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.765 "is_configured": false, 00:14:45.765 "data_offset": 0, 00:14:45.765 "data_size": 0 00:14:45.765 }, 00:14:45.765 { 00:14:45.765 "name": "BaseBdev3", 00:14:45.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.765 "is_configured": false, 00:14:45.765 "data_offset": 0, 00:14:45.765 "data_size": 0 00:14:45.765 }, 00:14:45.765 { 00:14:45.765 "name": "BaseBdev4", 00:14:45.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.765 "is_configured": false, 00:14:45.765 "data_offset": 0, 00:14:45.765 "data_size": 0 00:14:45.765 } 00:14:45.765 ] 00:14:45.765 }' 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.765 22:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.026 [2024-11-26 22:59:25.074201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.026 BaseBdev2 00:14:46.026 
22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.026 [ 00:14:46.026 { 00:14:46.026 "name": "BaseBdev2", 00:14:46.026 "aliases": [ 00:14:46.026 "5efebabb-3e10-4b85-b615-da8bd96ea029" 00:14:46.026 ], 00:14:46.026 "product_name": "Malloc disk", 00:14:46.026 "block_size": 512, 00:14:46.026 "num_blocks": 65536, 00:14:46.026 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 00:14:46.026 "assigned_rate_limits": { 
00:14:46.026 "rw_ios_per_sec": 0, 00:14:46.026 "rw_mbytes_per_sec": 0, 00:14:46.026 "r_mbytes_per_sec": 0, 00:14:46.026 "w_mbytes_per_sec": 0 00:14:46.026 }, 00:14:46.026 "claimed": true, 00:14:46.026 "claim_type": "exclusive_write", 00:14:46.026 "zoned": false, 00:14:46.026 "supported_io_types": { 00:14:46.026 "read": true, 00:14:46.026 "write": true, 00:14:46.026 "unmap": true, 00:14:46.026 "flush": true, 00:14:46.026 "reset": true, 00:14:46.026 "nvme_admin": false, 00:14:46.026 "nvme_io": false, 00:14:46.026 "nvme_io_md": false, 00:14:46.026 "write_zeroes": true, 00:14:46.026 "zcopy": true, 00:14:46.026 "get_zone_info": false, 00:14:46.026 "zone_management": false, 00:14:46.026 "zone_append": false, 00:14:46.026 "compare": false, 00:14:46.026 "compare_and_write": false, 00:14:46.026 "abort": true, 00:14:46.026 "seek_hole": false, 00:14:46.026 "seek_data": false, 00:14:46.026 "copy": true, 00:14:46.026 "nvme_iov_md": false 00:14:46.026 }, 00:14:46.026 "memory_domains": [ 00:14:46.026 { 00:14:46.026 "dma_device_id": "system", 00:14:46.026 "dma_device_type": 1 00:14:46.026 }, 00:14:46.026 { 00:14:46.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.026 "dma_device_type": 2 00:14:46.026 } 00:14:46.026 ], 00:14:46.026 "driver_specific": {} 00:14:46.026 } 00:14:46.026 ] 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.026 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.286 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.286 "name": "Existed_Raid", 00:14:46.286 "uuid": "7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:46.286 "strip_size_kb": 64, 00:14:46.286 "state": "configuring", 00:14:46.286 "raid_level": "raid5f", 00:14:46.286 "superblock": true, 00:14:46.286 "num_base_bdevs": 4, 00:14:46.286 "num_base_bdevs_discovered": 2, 00:14:46.286 
"num_base_bdevs_operational": 4, 00:14:46.286 "base_bdevs_list": [ 00:14:46.286 { 00:14:46.286 "name": "BaseBdev1", 00:14:46.286 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:46.286 "is_configured": true, 00:14:46.286 "data_offset": 2048, 00:14:46.286 "data_size": 63488 00:14:46.286 }, 00:14:46.286 { 00:14:46.286 "name": "BaseBdev2", 00:14:46.286 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 00:14:46.286 "is_configured": true, 00:14:46.286 "data_offset": 2048, 00:14:46.286 "data_size": 63488 00:14:46.286 }, 00:14:46.286 { 00:14:46.286 "name": "BaseBdev3", 00:14:46.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.286 "is_configured": false, 00:14:46.286 "data_offset": 0, 00:14:46.286 "data_size": 0 00:14:46.286 }, 00:14:46.286 { 00:14:46.286 "name": "BaseBdev4", 00:14:46.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.286 "is_configured": false, 00:14:46.286 "data_offset": 0, 00:14:46.286 "data_size": 0 00:14:46.286 } 00:14:46.286 ] 00:14:46.286 }' 00:14:46.286 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.286 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 [2024-11-26 22:59:25.599692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.547 BaseBdev3 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.547 22:59:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 [ 00:14:46.547 { 00:14:46.547 "name": "BaseBdev3", 00:14:46.547 "aliases": [ 00:14:46.547 "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0" 00:14:46.547 ], 00:14:46.547 "product_name": "Malloc disk", 00:14:46.547 "block_size": 512, 00:14:46.547 "num_blocks": 65536, 00:14:46.547 "uuid": "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0", 00:14:46.547 "assigned_rate_limits": { 00:14:46.547 "rw_ios_per_sec": 0, 00:14:46.547 "rw_mbytes_per_sec": 0, 00:14:46.547 "r_mbytes_per_sec": 0, 00:14:46.547 "w_mbytes_per_sec": 0 00:14:46.547 }, 00:14:46.547 "claimed": true, 00:14:46.547 "claim_type": "exclusive_write", 
00:14:46.547 "zoned": false, 00:14:46.547 "supported_io_types": { 00:14:46.547 "read": true, 00:14:46.547 "write": true, 00:14:46.547 "unmap": true, 00:14:46.547 "flush": true, 00:14:46.547 "reset": true, 00:14:46.547 "nvme_admin": false, 00:14:46.547 "nvme_io": false, 00:14:46.547 "nvme_io_md": false, 00:14:46.547 "write_zeroes": true, 00:14:46.547 "zcopy": true, 00:14:46.547 "get_zone_info": false, 00:14:46.547 "zone_management": false, 00:14:46.547 "zone_append": false, 00:14:46.547 "compare": false, 00:14:46.547 "compare_and_write": false, 00:14:46.547 "abort": true, 00:14:46.547 "seek_hole": false, 00:14:46.547 "seek_data": false, 00:14:46.547 "copy": true, 00:14:46.547 "nvme_iov_md": false 00:14:46.547 }, 00:14:46.547 "memory_domains": [ 00:14:46.547 { 00:14:46.547 "dma_device_id": "system", 00:14:46.547 "dma_device_type": 1 00:14:46.547 }, 00:14:46.547 { 00:14:46.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.547 "dma_device_type": 2 00:14:46.547 } 00:14:46.547 ], 00:14:46.547 "driver_specific": {} 00:14:46.547 } 00:14:46.547 ] 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.807 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.807 "name": "Existed_Raid", 00:14:46.807 "uuid": "7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:46.807 "strip_size_kb": 64, 00:14:46.807 "state": "configuring", 00:14:46.807 "raid_level": "raid5f", 00:14:46.807 "superblock": true, 00:14:46.807 "num_base_bdevs": 4, 00:14:46.807 "num_base_bdevs_discovered": 3, 00:14:46.807 "num_base_bdevs_operational": 4, 00:14:46.807 "base_bdevs_list": [ 00:14:46.807 { 00:14:46.807 "name": "BaseBdev1", 00:14:46.807 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:46.807 "is_configured": true, 00:14:46.807 "data_offset": 2048, 
00:14:46.807 "data_size": 63488 00:14:46.807 }, 00:14:46.807 { 00:14:46.807 "name": "BaseBdev2", 00:14:46.807 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 00:14:46.807 "is_configured": true, 00:14:46.807 "data_offset": 2048, 00:14:46.807 "data_size": 63488 00:14:46.807 }, 00:14:46.807 { 00:14:46.807 "name": "BaseBdev3", 00:14:46.807 "uuid": "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0", 00:14:46.807 "is_configured": true, 00:14:46.807 "data_offset": 2048, 00:14:46.807 "data_size": 63488 00:14:46.807 }, 00:14:46.807 { 00:14:46.807 "name": "BaseBdev4", 00:14:46.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.807 "is_configured": false, 00:14:46.807 "data_offset": 0, 00:14:46.807 "data_size": 0 00:14:46.807 } 00:14:46.807 ] 00:14:46.807 }' 00:14:46.807 22:59:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.807 22:59:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.068 [2024-11-26 22:59:26.078843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.068 [2024-11-26 22:59:26.079115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:47.068 [2024-11-26 22:59:26.079160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:47.068 [2024-11-26 22:59:26.079502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:47.068 BaseBdev4 00:14:47.068 [2024-11-26 22:59:26.080037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:47.068 
[2024-11-26 22:59:26.080099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:47.068 [2024-11-26 22:59:26.080278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.068 [ 00:14:47.068 { 00:14:47.068 "name": "BaseBdev4", 00:14:47.068 "aliases": [ 
00:14:47.068 "898eb906-5ade-4a61-b1b8-d0544036a4b1" 00:14:47.068 ], 00:14:47.068 "product_name": "Malloc disk", 00:14:47.068 "block_size": 512, 00:14:47.068 "num_blocks": 65536, 00:14:47.068 "uuid": "898eb906-5ade-4a61-b1b8-d0544036a4b1", 00:14:47.068 "assigned_rate_limits": { 00:14:47.068 "rw_ios_per_sec": 0, 00:14:47.068 "rw_mbytes_per_sec": 0, 00:14:47.068 "r_mbytes_per_sec": 0, 00:14:47.068 "w_mbytes_per_sec": 0 00:14:47.068 }, 00:14:47.068 "claimed": true, 00:14:47.068 "claim_type": "exclusive_write", 00:14:47.068 "zoned": false, 00:14:47.068 "supported_io_types": { 00:14:47.068 "read": true, 00:14:47.068 "write": true, 00:14:47.068 "unmap": true, 00:14:47.068 "flush": true, 00:14:47.068 "reset": true, 00:14:47.068 "nvme_admin": false, 00:14:47.068 "nvme_io": false, 00:14:47.068 "nvme_io_md": false, 00:14:47.068 "write_zeroes": true, 00:14:47.068 "zcopy": true, 00:14:47.068 "get_zone_info": false, 00:14:47.068 "zone_management": false, 00:14:47.068 "zone_append": false, 00:14:47.068 "compare": false, 00:14:47.068 "compare_and_write": false, 00:14:47.068 "abort": true, 00:14:47.068 "seek_hole": false, 00:14:47.068 "seek_data": false, 00:14:47.068 "copy": true, 00:14:47.068 "nvme_iov_md": false 00:14:47.068 }, 00:14:47.068 "memory_domains": [ 00:14:47.068 { 00:14:47.068 "dma_device_id": "system", 00:14:47.068 "dma_device_type": 1 00:14:47.068 }, 00:14:47.068 { 00:14:47.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.068 "dma_device_type": 2 00:14:47.068 } 00:14:47.068 ], 00:14:47.068 "driver_specific": {} 00:14:47.068 } 00:14:47.068 ] 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.068 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.069 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.069 "name": "Existed_Raid", 00:14:47.069 "uuid": 
"7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:47.069 "strip_size_kb": 64, 00:14:47.069 "state": "online", 00:14:47.069 "raid_level": "raid5f", 00:14:47.069 "superblock": true, 00:14:47.069 "num_base_bdevs": 4, 00:14:47.069 "num_base_bdevs_discovered": 4, 00:14:47.069 "num_base_bdevs_operational": 4, 00:14:47.069 "base_bdevs_list": [ 00:14:47.069 { 00:14:47.069 "name": "BaseBdev1", 00:14:47.069 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:47.069 "is_configured": true, 00:14:47.069 "data_offset": 2048, 00:14:47.069 "data_size": 63488 00:14:47.069 }, 00:14:47.069 { 00:14:47.069 "name": "BaseBdev2", 00:14:47.069 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 00:14:47.069 "is_configured": true, 00:14:47.069 "data_offset": 2048, 00:14:47.069 "data_size": 63488 00:14:47.069 }, 00:14:47.069 { 00:14:47.069 "name": "BaseBdev3", 00:14:47.069 "uuid": "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0", 00:14:47.069 "is_configured": true, 00:14:47.069 "data_offset": 2048, 00:14:47.069 "data_size": 63488 00:14:47.069 }, 00:14:47.069 { 00:14:47.069 "name": "BaseBdev4", 00:14:47.069 "uuid": "898eb906-5ade-4a61-b1b8-d0544036a4b1", 00:14:47.069 "is_configured": true, 00:14:47.069 "data_offset": 2048, 00:14:47.069 "data_size": 63488 00:14:47.069 } 00:14:47.069 ] 00:14:47.069 }' 00:14:47.069 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.069 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.639 22:59:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.639 [2024-11-26 22:59:26.591186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.639 "name": "Existed_Raid", 00:14:47.639 "aliases": [ 00:14:47.639 "7fd2b138-ae0d-4249-b402-7c6f5c7b2143" 00:14:47.639 ], 00:14:47.639 "product_name": "Raid Volume", 00:14:47.639 "block_size": 512, 00:14:47.639 "num_blocks": 190464, 00:14:47.639 "uuid": "7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:47.639 "assigned_rate_limits": { 00:14:47.639 "rw_ios_per_sec": 0, 00:14:47.639 "rw_mbytes_per_sec": 0, 00:14:47.639 "r_mbytes_per_sec": 0, 00:14:47.639 "w_mbytes_per_sec": 0 00:14:47.639 }, 00:14:47.639 "claimed": false, 00:14:47.639 "zoned": false, 00:14:47.639 "supported_io_types": { 00:14:47.639 "read": true, 00:14:47.639 "write": true, 00:14:47.639 "unmap": false, 00:14:47.639 "flush": false, 00:14:47.639 "reset": true, 00:14:47.639 "nvme_admin": false, 00:14:47.639 "nvme_io": false, 00:14:47.639 "nvme_io_md": false, 00:14:47.639 "write_zeroes": true, 00:14:47.639 "zcopy": false, 00:14:47.639 "get_zone_info": false, 00:14:47.639 "zone_management": false, 00:14:47.639 
"zone_append": false, 00:14:47.639 "compare": false, 00:14:47.639 "compare_and_write": false, 00:14:47.639 "abort": false, 00:14:47.639 "seek_hole": false, 00:14:47.639 "seek_data": false, 00:14:47.639 "copy": false, 00:14:47.639 "nvme_iov_md": false 00:14:47.639 }, 00:14:47.639 "driver_specific": { 00:14:47.639 "raid": { 00:14:47.639 "uuid": "7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:47.639 "strip_size_kb": 64, 00:14:47.639 "state": "online", 00:14:47.639 "raid_level": "raid5f", 00:14:47.639 "superblock": true, 00:14:47.639 "num_base_bdevs": 4, 00:14:47.639 "num_base_bdevs_discovered": 4, 00:14:47.639 "num_base_bdevs_operational": 4, 00:14:47.639 "base_bdevs_list": [ 00:14:47.639 { 00:14:47.639 "name": "BaseBdev1", 00:14:47.639 "uuid": "6b01f803-cb2e-46a9-b58f-d2ccb0a1bbd1", 00:14:47.639 "is_configured": true, 00:14:47.639 "data_offset": 2048, 00:14:47.639 "data_size": 63488 00:14:47.639 }, 00:14:47.639 { 00:14:47.639 "name": "BaseBdev2", 00:14:47.639 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 00:14:47.639 "is_configured": true, 00:14:47.639 "data_offset": 2048, 00:14:47.639 "data_size": 63488 00:14:47.639 }, 00:14:47.639 { 00:14:47.639 "name": "BaseBdev3", 00:14:47.639 "uuid": "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0", 00:14:47.639 "is_configured": true, 00:14:47.639 "data_offset": 2048, 00:14:47.639 "data_size": 63488 00:14:47.639 }, 00:14:47.639 { 00:14:47.639 "name": "BaseBdev4", 00:14:47.639 "uuid": "898eb906-5ade-4a61-b1b8-d0544036a4b1", 00:14:47.639 "is_configured": true, 00:14:47.639 "data_offset": 2048, 00:14:47.639 "data_size": 63488 00:14:47.639 } 00:14:47.639 ] 00:14:47.639 } 00:14:47.639 } 00:14:47.639 }' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:47.639 BaseBdev2 00:14:47.639 BaseBdev3 
00:14:47.639 BaseBdev4' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.639 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 [2024-11-26 22:59:26.935132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.900 
22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.900 22:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.900 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.900 "name": "Existed_Raid", 00:14:47.900 "uuid": "7fd2b138-ae0d-4249-b402-7c6f5c7b2143", 00:14:47.900 "strip_size_kb": 64, 00:14:47.900 "state": "online", 00:14:47.900 "raid_level": "raid5f", 00:14:47.900 "superblock": true, 00:14:47.900 "num_base_bdevs": 4, 00:14:47.900 "num_base_bdevs_discovered": 3, 00:14:47.900 "num_base_bdevs_operational": 3, 00:14:47.900 "base_bdevs_list": [ 00:14:47.901 { 00:14:47.901 "name": null, 00:14:47.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.901 "is_configured": false, 00:14:47.901 "data_offset": 0, 00:14:47.901 "data_size": 63488 00:14:47.901 }, 00:14:47.901 { 00:14:47.901 "name": "BaseBdev2", 00:14:47.901 "uuid": "5efebabb-3e10-4b85-b615-da8bd96ea029", 
00:14:47.901 "is_configured": true, 00:14:47.901 "data_offset": 2048, 00:14:47.901 "data_size": 63488 00:14:47.901 }, 00:14:47.901 { 00:14:47.901 "name": "BaseBdev3", 00:14:47.901 "uuid": "c0ddb112-0dbb-4700-84f7-fc4edbbc90e0", 00:14:47.901 "is_configured": true, 00:14:47.901 "data_offset": 2048, 00:14:47.901 "data_size": 63488 00:14:47.901 }, 00:14:47.901 { 00:14:47.901 "name": "BaseBdev4", 00:14:47.901 "uuid": "898eb906-5ade-4a61-b1b8-d0544036a4b1", 00:14:47.901 "is_configured": true, 00:14:47.901 "data_offset": 2048, 00:14:47.901 "data_size": 63488 00:14:47.901 } 00:14:47.901 ] 00:14:47.901 }' 00:14:47.901 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.901 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.472 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:48.472 
22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.473 [2024-11-26 22:59:27.458731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.473 [2024-11-26 22:59:27.458873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.473 [2024-11-26 22:59:27.469830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.473 22:59:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.473 [2024-11-26 22:59:27.525868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.473 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.473 [2024-11-26 22:59:27.596932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:48.473 [2024-11-26 22:59:27.596981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:48.734 22:59:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 BaseBdev2 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 [ 00:14:48.734 { 00:14:48.734 "name": "BaseBdev2", 00:14:48.734 "aliases": [ 00:14:48.734 "0ef0805d-aad7-4ef2-9fa7-44266d44496c" 00:14:48.734 ], 00:14:48.734 "product_name": "Malloc disk", 00:14:48.734 "block_size": 512, 00:14:48.734 "num_blocks": 65536, 00:14:48.734 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:48.734 "assigned_rate_limits": { 00:14:48.734 "rw_ios_per_sec": 0, 00:14:48.734 "rw_mbytes_per_sec": 0, 00:14:48.734 "r_mbytes_per_sec": 0, 00:14:48.734 "w_mbytes_per_sec": 0 00:14:48.734 }, 
00:14:48.734 "claimed": false, 00:14:48.734 "zoned": false, 00:14:48.734 "supported_io_types": { 00:14:48.734 "read": true, 00:14:48.734 "write": true, 00:14:48.734 "unmap": true, 00:14:48.734 "flush": true, 00:14:48.734 "reset": true, 00:14:48.734 "nvme_admin": false, 00:14:48.734 "nvme_io": false, 00:14:48.734 "nvme_io_md": false, 00:14:48.734 "write_zeroes": true, 00:14:48.734 "zcopy": true, 00:14:48.734 "get_zone_info": false, 00:14:48.734 "zone_management": false, 00:14:48.734 "zone_append": false, 00:14:48.734 "compare": false, 00:14:48.734 "compare_and_write": false, 00:14:48.734 "abort": true, 00:14:48.734 "seek_hole": false, 00:14:48.734 "seek_data": false, 00:14:48.734 "copy": true, 00:14:48.734 "nvme_iov_md": false 00:14:48.734 }, 00:14:48.734 "memory_domains": [ 00:14:48.734 { 00:14:48.734 "dma_device_id": "system", 00:14:48.734 "dma_device_type": 1 00:14:48.734 }, 00:14:48.734 { 00:14:48.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.734 "dma_device_type": 2 00:14:48.734 } 00:14:48.734 ], 00:14:48.734 "driver_specific": {} 00:14:48.734 } 00:14:48.734 ] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 BaseBdev3 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.734 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.735 [ 00:14:48.735 { 00:14:48.735 "name": "BaseBdev3", 00:14:48.735 "aliases": [ 00:14:48.735 "68f65a59-2401-437b-9e80-b62909c18ff8" 00:14:48.735 ], 00:14:48.735 "product_name": "Malloc disk", 00:14:48.735 "block_size": 512, 00:14:48.735 "num_blocks": 65536, 00:14:48.735 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:48.735 "assigned_rate_limits": { 00:14:48.735 "rw_ios_per_sec": 0, 00:14:48.735 
"rw_mbytes_per_sec": 0, 00:14:48.735 "r_mbytes_per_sec": 0, 00:14:48.735 "w_mbytes_per_sec": 0 00:14:48.735 }, 00:14:48.735 "claimed": false, 00:14:48.735 "zoned": false, 00:14:48.735 "supported_io_types": { 00:14:48.735 "read": true, 00:14:48.735 "write": true, 00:14:48.735 "unmap": true, 00:14:48.735 "flush": true, 00:14:48.735 "reset": true, 00:14:48.735 "nvme_admin": false, 00:14:48.735 "nvme_io": false, 00:14:48.735 "nvme_io_md": false, 00:14:48.735 "write_zeroes": true, 00:14:48.735 "zcopy": true, 00:14:48.735 "get_zone_info": false, 00:14:48.735 "zone_management": false, 00:14:48.735 "zone_append": false, 00:14:48.735 "compare": false, 00:14:48.735 "compare_and_write": false, 00:14:48.735 "abort": true, 00:14:48.735 "seek_hole": false, 00:14:48.735 "seek_data": false, 00:14:48.735 "copy": true, 00:14:48.735 "nvme_iov_md": false 00:14:48.735 }, 00:14:48.735 "memory_domains": [ 00:14:48.735 { 00:14:48.735 "dma_device_id": "system", 00:14:48.735 "dma_device_type": 1 00:14:48.735 }, 00:14:48.735 { 00:14:48.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.735 "dma_device_type": 2 00:14:48.735 } 00:14:48.735 ], 00:14:48.735 "driver_specific": {} 00:14:48.735 } 00:14:48.735 ] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:48.735 BaseBdev4 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.735 [ 00:14:48.735 { 00:14:48.735 "name": "BaseBdev4", 00:14:48.735 "aliases": [ 00:14:48.735 "77c84040-26d1-4199-978d-6acc5c069752" 00:14:48.735 ], 00:14:48.735 "product_name": "Malloc disk", 00:14:48.735 "block_size": 512, 00:14:48.735 "num_blocks": 65536, 00:14:48.735 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 
00:14:48.735 "assigned_rate_limits": { 00:14:48.735 "rw_ios_per_sec": 0, 00:14:48.735 "rw_mbytes_per_sec": 0, 00:14:48.735 "r_mbytes_per_sec": 0, 00:14:48.735 "w_mbytes_per_sec": 0 00:14:48.735 }, 00:14:48.735 "claimed": false, 00:14:48.735 "zoned": false, 00:14:48.735 "supported_io_types": { 00:14:48.735 "read": true, 00:14:48.735 "write": true, 00:14:48.735 "unmap": true, 00:14:48.735 "flush": true, 00:14:48.735 "reset": true, 00:14:48.735 "nvme_admin": false, 00:14:48.735 "nvme_io": false, 00:14:48.735 "nvme_io_md": false, 00:14:48.735 "write_zeroes": true, 00:14:48.735 "zcopy": true, 00:14:48.735 "get_zone_info": false, 00:14:48.735 "zone_management": false, 00:14:48.735 "zone_append": false, 00:14:48.735 "compare": false, 00:14:48.735 "compare_and_write": false, 00:14:48.735 "abort": true, 00:14:48.735 "seek_hole": false, 00:14:48.735 "seek_data": false, 00:14:48.735 "copy": true, 00:14:48.735 "nvme_iov_md": false 00:14:48.735 }, 00:14:48.735 "memory_domains": [ 00:14:48.735 { 00:14:48.735 "dma_device_id": "system", 00:14:48.735 "dma_device_type": 1 00:14:48.735 }, 00:14:48.735 { 00:14:48.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.735 "dma_device_type": 2 00:14:48.735 } 00:14:48.735 ], 00:14:48.735 "driver_specific": {} 00:14:48.735 } 00:14:48.735 ] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.735 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.735 [2024-11-26 22:59:27.824085] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.735 [2024-11-26 22:59:27.824194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.735 [2024-11-26 22:59:27.824216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.735 [2024-11-26 22:59:27.825932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.736 [2024-11-26 22:59:27.825981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.736 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.996 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.996 "name": "Existed_Raid", 00:14:48.996 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:48.996 "strip_size_kb": 64, 00:14:48.996 "state": "configuring", 00:14:48.996 "raid_level": "raid5f", 00:14:48.996 "superblock": true, 00:14:48.996 "num_base_bdevs": 4, 00:14:48.996 "num_base_bdevs_discovered": 3, 00:14:48.996 "num_base_bdevs_operational": 4, 00:14:48.996 "base_bdevs_list": [ 00:14:48.996 { 00:14:48.996 "name": "BaseBdev1", 00:14:48.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.996 "is_configured": false, 00:14:48.996 "data_offset": 0, 00:14:48.996 "data_size": 0 00:14:48.996 }, 00:14:48.996 { 00:14:48.996 "name": "BaseBdev2", 00:14:48.996 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:48.996 "is_configured": true, 00:14:48.996 "data_offset": 2048, 00:14:48.996 "data_size": 63488 00:14:48.996 }, 00:14:48.996 { 00:14:48.996 "name": "BaseBdev3", 00:14:48.996 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:48.996 "is_configured": true, 00:14:48.996 "data_offset": 2048, 00:14:48.996 "data_size": 63488 00:14:48.996 }, 00:14:48.996 { 00:14:48.996 "name": "BaseBdev4", 00:14:48.996 "uuid": 
"77c84040-26d1-4199-978d-6acc5c069752", 00:14:48.997 "is_configured": true, 00:14:48.997 "data_offset": 2048, 00:14:48.997 "data_size": 63488 00:14:48.997 } 00:14:48.997 ] 00:14:48.997 }' 00:14:48.997 22:59:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.997 22:59:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.257 [2024-11-26 22:59:28.280176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.257 22:59:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.257 "name": "Existed_Raid", 00:14:49.257 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:49.257 "strip_size_kb": 64, 00:14:49.257 "state": "configuring", 00:14:49.257 "raid_level": "raid5f", 00:14:49.257 "superblock": true, 00:14:49.257 "num_base_bdevs": 4, 00:14:49.257 "num_base_bdevs_discovered": 2, 00:14:49.257 "num_base_bdevs_operational": 4, 00:14:49.257 "base_bdevs_list": [ 00:14:49.257 { 00:14:49.257 "name": "BaseBdev1", 00:14:49.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.257 "is_configured": false, 00:14:49.257 "data_offset": 0, 00:14:49.257 "data_size": 0 00:14:49.257 }, 00:14:49.257 { 00:14:49.257 "name": null, 00:14:49.257 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:49.257 "is_configured": false, 00:14:49.257 "data_offset": 0, 00:14:49.257 "data_size": 63488 00:14:49.257 }, 00:14:49.257 { 00:14:49.257 "name": "BaseBdev3", 00:14:49.257 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:49.257 "is_configured": true, 00:14:49.257 "data_offset": 2048, 00:14:49.257 "data_size": 63488 00:14:49.257 }, 00:14:49.257 { 
00:14:49.257 "name": "BaseBdev4", 00:14:49.257 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:49.257 "is_configured": true, 00:14:49.257 "data_offset": 2048, 00:14:49.257 "data_size": 63488 00:14:49.257 } 00:14:49.257 ] 00:14:49.257 }' 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.257 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 [2024-11-26 22:59:28.747152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.829 BaseBdev1 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev1 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 [ 00:14:49.829 { 00:14:49.829 "name": "BaseBdev1", 00:14:49.829 "aliases": [ 00:14:49.829 "c0a3aba9-d366-4a79-af17-4204883b534d" 00:14:49.829 ], 00:14:49.829 "product_name": "Malloc disk", 00:14:49.829 "block_size": 512, 00:14:49.829 "num_blocks": 65536, 00:14:49.829 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:49.829 "assigned_rate_limits": { 00:14:49.829 "rw_ios_per_sec": 0, 00:14:49.829 "rw_mbytes_per_sec": 0, 00:14:49.829 "r_mbytes_per_sec": 0, 00:14:49.829 "w_mbytes_per_sec": 0 00:14:49.829 }, 00:14:49.829 "claimed": true, 00:14:49.829 "claim_type": "exclusive_write", 00:14:49.829 "zoned": false, 00:14:49.829 "supported_io_types": { 00:14:49.829 
"read": true, 00:14:49.829 "write": true, 00:14:49.829 "unmap": true, 00:14:49.829 "flush": true, 00:14:49.830 "reset": true, 00:14:49.830 "nvme_admin": false, 00:14:49.830 "nvme_io": false, 00:14:49.830 "nvme_io_md": false, 00:14:49.830 "write_zeroes": true, 00:14:49.830 "zcopy": true, 00:14:49.830 "get_zone_info": false, 00:14:49.830 "zone_management": false, 00:14:49.830 "zone_append": false, 00:14:49.830 "compare": false, 00:14:49.830 "compare_and_write": false, 00:14:49.830 "abort": true, 00:14:49.830 "seek_hole": false, 00:14:49.830 "seek_data": false, 00:14:49.830 "copy": true, 00:14:49.830 "nvme_iov_md": false 00:14:49.830 }, 00:14:49.830 "memory_domains": [ 00:14:49.830 { 00:14:49.830 "dma_device_id": "system", 00:14:49.830 "dma_device_type": 1 00:14:49.830 }, 00:14:49.830 { 00:14:49.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.830 "dma_device_type": 2 00:14:49.830 } 00:14:49.830 ], 00:14:49.830 "driver_specific": {} 00:14:49.830 } 00:14:49.830 ] 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.830 22:59:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.830 "name": "Existed_Raid", 00:14:49.830 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:49.830 "strip_size_kb": 64, 00:14:49.830 "state": "configuring", 00:14:49.830 "raid_level": "raid5f", 00:14:49.830 "superblock": true, 00:14:49.830 "num_base_bdevs": 4, 00:14:49.830 "num_base_bdevs_discovered": 3, 00:14:49.830 "num_base_bdevs_operational": 4, 00:14:49.830 "base_bdevs_list": [ 00:14:49.830 { 00:14:49.830 "name": "BaseBdev1", 00:14:49.830 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:49.830 "is_configured": true, 00:14:49.830 "data_offset": 2048, 00:14:49.830 "data_size": 63488 00:14:49.830 }, 00:14:49.830 { 00:14:49.830 "name": null, 00:14:49.830 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:49.830 "is_configured": false, 00:14:49.830 "data_offset": 0, 00:14:49.830 "data_size": 63488 00:14:49.830 }, 00:14:49.830 { 
00:14:49.830 "name": "BaseBdev3", 00:14:49.830 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:49.830 "is_configured": true, 00:14:49.830 "data_offset": 2048, 00:14:49.830 "data_size": 63488 00:14:49.830 }, 00:14:49.830 { 00:14:49.830 "name": "BaseBdev4", 00:14:49.830 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:49.830 "is_configured": true, 00:14:49.830 "data_offset": 2048, 00:14:49.830 "data_size": 63488 00:14:49.830 } 00:14:49.830 ] 00:14:49.830 }' 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.830 22:59:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.098 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.098 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.099 [2024-11-26 22:59:29.203330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.099 22:59:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.099 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.376 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.376 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.376 "name": "Existed_Raid", 00:14:50.376 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 
00:14:50.376 "strip_size_kb": 64, 00:14:50.376 "state": "configuring", 00:14:50.376 "raid_level": "raid5f", 00:14:50.376 "superblock": true, 00:14:50.376 "num_base_bdevs": 4, 00:14:50.376 "num_base_bdevs_discovered": 2, 00:14:50.376 "num_base_bdevs_operational": 4, 00:14:50.376 "base_bdevs_list": [ 00:14:50.376 { 00:14:50.376 "name": "BaseBdev1", 00:14:50.376 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:50.376 "is_configured": true, 00:14:50.376 "data_offset": 2048, 00:14:50.376 "data_size": 63488 00:14:50.377 }, 00:14:50.377 { 00:14:50.377 "name": null, 00:14:50.377 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:50.377 "is_configured": false, 00:14:50.377 "data_offset": 0, 00:14:50.377 "data_size": 63488 00:14:50.377 }, 00:14:50.377 { 00:14:50.377 "name": null, 00:14:50.377 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:50.377 "is_configured": false, 00:14:50.377 "data_offset": 0, 00:14:50.377 "data_size": 63488 00:14:50.377 }, 00:14:50.377 { 00:14:50.377 "name": "BaseBdev4", 00:14:50.377 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:50.377 "is_configured": true, 00:14:50.377 "data_offset": 2048, 00:14:50.377 "data_size": 63488 00:14:50.377 } 00:14:50.377 ] 00:14:50.377 }' 00:14:50.377 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.377 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.654 [2024-11-26 22:59:29.687484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.654 
22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.654 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.654 "name": "Existed_Raid", 00:14:50.654 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:50.654 "strip_size_kb": 64, 00:14:50.654 "state": "configuring", 00:14:50.654 "raid_level": "raid5f", 00:14:50.654 "superblock": true, 00:14:50.654 "num_base_bdevs": 4, 00:14:50.654 "num_base_bdevs_discovered": 3, 00:14:50.654 "num_base_bdevs_operational": 4, 00:14:50.654 "base_bdevs_list": [ 00:14:50.654 { 00:14:50.654 "name": "BaseBdev1", 00:14:50.654 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:50.654 "is_configured": true, 00:14:50.654 "data_offset": 2048, 00:14:50.654 "data_size": 63488 00:14:50.654 }, 00:14:50.654 { 00:14:50.654 "name": null, 00:14:50.654 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:50.654 "is_configured": false, 00:14:50.654 "data_offset": 0, 00:14:50.654 "data_size": 63488 00:14:50.654 }, 00:14:50.654 { 00:14:50.654 "name": "BaseBdev3", 00:14:50.654 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:50.654 "is_configured": true, 00:14:50.654 "data_offset": 2048, 00:14:50.655 "data_size": 63488 00:14:50.655 }, 00:14:50.655 { 00:14:50.655 "name": "BaseBdev4", 00:14:50.655 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:50.655 "is_configured": true, 00:14:50.655 "data_offset": 2048, 00:14:50.655 "data_size": 63488 00:14:50.655 } 
00:14:50.655 ] 00:14:50.655 }' 00:14:50.655 22:59:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.655 22:59:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.223 [2024-11-26 22:59:30.127619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.223 "name": "Existed_Raid", 00:14:51.223 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:51.223 "strip_size_kb": 64, 00:14:51.223 "state": "configuring", 00:14:51.223 "raid_level": "raid5f", 00:14:51.223 "superblock": true, 00:14:51.223 "num_base_bdevs": 4, 00:14:51.223 "num_base_bdevs_discovered": 2, 00:14:51.223 "num_base_bdevs_operational": 4, 00:14:51.223 "base_bdevs_list": [ 00:14:51.223 { 00:14:51.223 "name": null, 00:14:51.223 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:51.223 "is_configured": false, 00:14:51.223 
"data_offset": 0, 00:14:51.223 "data_size": 63488 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": null, 00:14:51.223 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:51.223 "is_configured": false, 00:14:51.223 "data_offset": 0, 00:14:51.223 "data_size": 63488 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": "BaseBdev3", 00:14:51.223 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:51.223 "is_configured": true, 00:14:51.223 "data_offset": 2048, 00:14:51.223 "data_size": 63488 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": "BaseBdev4", 00:14:51.223 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:51.223 "is_configured": true, 00:14:51.223 "data_offset": 2048, 00:14:51.223 "data_size": 63488 00:14:51.223 } 00:14:51.223 ] 00:14:51.223 }' 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.223 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.482 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.482 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.482 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.742 22:59:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 [2024-11-26 22:59:30.630370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.742 "name": "Existed_Raid", 00:14:51.742 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56", 00:14:51.742 "strip_size_kb": 64, 00:14:51.742 "state": "configuring", 00:14:51.742 "raid_level": "raid5f", 00:14:51.742 "superblock": true, 00:14:51.742 "num_base_bdevs": 4, 00:14:51.742 "num_base_bdevs_discovered": 3, 00:14:51.742 "num_base_bdevs_operational": 4, 00:14:51.742 "base_bdevs_list": [ 00:14:51.742 { 00:14:51.742 "name": null, 00:14:51.742 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d", 00:14:51.742 "is_configured": false, 00:14:51.742 "data_offset": 0, 00:14:51.742 "data_size": 63488 00:14:51.742 }, 00:14:51.742 { 00:14:51.742 "name": "BaseBdev2", 00:14:51.742 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c", 00:14:51.742 "is_configured": true, 00:14:51.742 "data_offset": 2048, 00:14:51.742 "data_size": 63488 00:14:51.742 }, 00:14:51.742 { 00:14:51.742 "name": "BaseBdev3", 00:14:51.742 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8", 00:14:51.742 "is_configured": true, 00:14:51.742 "data_offset": 2048, 00:14:51.742 "data_size": 63488 00:14:51.742 }, 00:14:51.742 { 00:14:51.742 "name": "BaseBdev4", 00:14:51.742 "uuid": "77c84040-26d1-4199-978d-6acc5c069752", 00:14:51.742 "is_configured": true, 00:14:51.742 "data_offset": 2048, 00:14:51.742 "data_size": 63488 00:14:51.742 } 00:14:51.742 ] 00:14:51.742 }' 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.742 22:59:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.002 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c0a3aba9-d366-4a79-af17-4204883b534d 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.261 [2024-11-26 22:59:31.165362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.261 [2024-11-26 22:59:31.165597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.261 [2024-11-26 22:59:31.165650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.261 [2024-11-26 22:59:31.165900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000067d0 00:14:52.261 NewBaseBdev 00:14:52.261 [2024-11-26 22:59:31.166376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.261 [2024-11-26 22:59:31.166396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:52.261 [2024-11-26 22:59:31.166493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.261 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.261 
22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.261 [
00:14:52.261 {
00:14:52.261 "name": "NewBaseBdev",
00:14:52.261 "aliases": [
00:14:52.261 "c0a3aba9-d366-4a79-af17-4204883b534d"
00:14:52.261 ],
00:14:52.261 "product_name": "Malloc disk",
00:14:52.261 "block_size": 512,
00:14:52.261 "num_blocks": 65536,
00:14:52.261 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d",
00:14:52.261 "assigned_rate_limits": {
00:14:52.261 "rw_ios_per_sec": 0,
00:14:52.261 "rw_mbytes_per_sec": 0,
00:14:52.261 "r_mbytes_per_sec": 0,
00:14:52.261 "w_mbytes_per_sec": 0
00:14:52.261 },
00:14:52.261 "claimed": true,
00:14:52.261 "claim_type": "exclusive_write",
00:14:52.261 "zoned": false,
00:14:52.261 "supported_io_types": {
00:14:52.261 "read": true,
00:14:52.261 "write": true,
00:14:52.261 "unmap": true,
00:14:52.261 "flush": true,
00:14:52.261 "reset": true,
00:14:52.261 "nvme_admin": false,
00:14:52.261 "nvme_io": false,
00:14:52.261 "nvme_io_md": false,
00:14:52.261 "write_zeroes": true,
00:14:52.261 "zcopy": true,
00:14:52.261 "get_zone_info": false,
00:14:52.261 "zone_management": false,
00:14:52.261 "zone_append": false,
00:14:52.261 "compare": false,
00:14:52.261 "compare_and_write": false,
00:14:52.261 "abort": true,
00:14:52.261 "seek_hole": false,
00:14:52.261 "seek_data": false,
00:14:52.261 "copy": true,
00:14:52.261 "nvme_iov_md": false
00:14:52.261 },
00:14:52.261 "memory_domains": [
00:14:52.261 {
00:14:52.261 "dma_device_id": "system",
00:14:52.261 "dma_device_type": 1
00:14:52.261 },
00:14:52.261 {
00:14:52.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:52.261 "dma_device_type": 2
00:14:52.261 }
00:14:52.261 ],
00:14:52.261 "driver_specific": {}
00:14:52.262 }
00:14:52.262 ]
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:52.262 "name": "Existed_Raid",
00:14:52.262 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56",
00:14:52.262 "strip_size_kb": 64,
00:14:52.262 "state": "online",
00:14:52.262 "raid_level": "raid5f",
00:14:52.262 "superblock": true,
00:14:52.262 "num_base_bdevs": 4,
00:14:52.262 "num_base_bdevs_discovered": 4,
00:14:52.262 "num_base_bdevs_operational": 4,
00:14:52.262 "base_bdevs_list": [
00:14:52.262 {
00:14:52.262 "name": "NewBaseBdev",
00:14:52.262 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d",
00:14:52.262 "is_configured": true,
00:14:52.262 "data_offset": 2048,
00:14:52.262 "data_size": 63488
00:14:52.262 },
00:14:52.262 {
00:14:52.262 "name": "BaseBdev2",
00:14:52.262 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c",
00:14:52.262 "is_configured": true,
00:14:52.262 "data_offset": 2048,
00:14:52.262 "data_size": 63488
00:14:52.262 },
00:14:52.262 {
00:14:52.262 "name": "BaseBdev3",
00:14:52.262 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8",
00:14:52.262 "is_configured": true,
00:14:52.262 "data_offset": 2048,
00:14:52.262 "data_size": 63488
00:14:52.262 },
00:14:52.262 {
00:14:52.262 "name": "BaseBdev4",
00:14:52.262 "uuid": "77c84040-26d1-4199-978d-6acc5c069752",
00:14:52.262 "is_configured": true,
00:14:52.262 "data_offset": 2048,
00:14:52.262 "data_size": 63488
00:14:52.262 }
00:14:52.262 ]
00:14:52.262 }'
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:52.262 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.831 [2024-11-26 22:59:31.669671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.831 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:52.831 "name": "Existed_Raid",
00:14:52.831 "aliases": [
00:14:52.831 "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56"
00:14:52.831 ],
00:14:52.831 "product_name": "Raid Volume",
00:14:52.831 "block_size": 512,
00:14:52.832 "num_blocks": 190464,
00:14:52.832 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56",
00:14:52.832 "assigned_rate_limits": {
00:14:52.832 "rw_ios_per_sec": 0,
00:14:52.832 "rw_mbytes_per_sec": 0,
00:14:52.832 "r_mbytes_per_sec": 0,
00:14:52.832 "w_mbytes_per_sec": 0
00:14:52.832 },
00:14:52.832 "claimed": false,
00:14:52.832 "zoned": false,
00:14:52.832 "supported_io_types": {
00:14:52.832 "read": true,
00:14:52.832 "write": true,
00:14:52.832 "unmap": false,
00:14:52.832 "flush": false,
00:14:52.832 "reset": true,
00:14:52.832 "nvme_admin": false,
00:14:52.832 "nvme_io": false,
00:14:52.832 "nvme_io_md": false,
00:14:52.832 "write_zeroes": true,
00:14:52.832 "zcopy": false,
00:14:52.832 "get_zone_info": false,
00:14:52.832 "zone_management": false,
00:14:52.832 "zone_append": false,
00:14:52.832 "compare": false,
00:14:52.832 "compare_and_write": false,
00:14:52.832 "abort": false,
00:14:52.832 "seek_hole": false,
00:14:52.832 "seek_data": false,
00:14:52.832 "copy": false,
00:14:52.832 "nvme_iov_md": false
00:14:52.832 },
00:14:52.832 "driver_specific": {
00:14:52.832 "raid": {
00:14:52.832 "uuid": "ed3c3885-19a5-4a39-954e-3b4b7c1a2e56",
00:14:52.832 "strip_size_kb": 64,
00:14:52.832 "state": "online",
00:14:52.832 "raid_level": "raid5f",
00:14:52.832 "superblock": true,
00:14:52.832 "num_base_bdevs": 4,
00:14:52.832 "num_base_bdevs_discovered": 4,
00:14:52.832 "num_base_bdevs_operational": 4,
00:14:52.832 "base_bdevs_list": [
00:14:52.832 {
00:14:52.832 "name": "NewBaseBdev",
00:14:52.832 "uuid": "c0a3aba9-d366-4a79-af17-4204883b534d",
00:14:52.832 "is_configured": true,
00:14:52.832 "data_offset": 2048,
00:14:52.832 "data_size": 63488
00:14:52.832 },
00:14:52.832 {
00:14:52.832 "name": "BaseBdev2",
00:14:52.832 "uuid": "0ef0805d-aad7-4ef2-9fa7-44266d44496c",
00:14:52.832 "is_configured": true,
00:14:52.832 "data_offset": 2048,
00:14:52.832 "data_size": 63488
00:14:52.832 },
00:14:52.832 {
00:14:52.832 "name": "BaseBdev3",
00:14:52.832 "uuid": "68f65a59-2401-437b-9e80-b62909c18ff8",
00:14:52.832 "is_configured": true,
00:14:52.832 "data_offset": 2048,
00:14:52.832 "data_size": 63488
00:14:52.832 },
00:14:52.832 {
00:14:52.832 "name": "BaseBdev4",
00:14:52.832 "uuid": "77c84040-26d1-4199-978d-6acc5c069752",
00:14:52.832 "is_configured": true,
00:14:52.832 "data_offset": 2048,
00:14:52.832 "data_size": 63488
00:14:52.832 }
00:14:52.832 ]
00:14:52.832 }
00:14:52.832 }
00:14:52.832 }'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:14:52.832 BaseBdev2
00:14:52.832 BaseBdev3
00:14:52.832 BaseBdev4'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.832 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.093 22:59:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.093 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:53.093 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:53.093 22:59:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.093 [2024-11-26 22:59:32.005587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:53.093 [2024-11-26 22:59:32.005613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:53.093 [2024-11-26 22:59:32.005680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:53.093 [2024-11-26 22:59:32.005923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:53.093 [2024-11-26 22:59:32.005943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95537
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95537 ']'
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 95537
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95537
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95537' killing process with pid 95537
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95537
00:14:53.093 [2024-11-26 22:59:32.053257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:53.093 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95537
00:14:53.093 [2024-11-26 22:59:32.093306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:53.354 22:59:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:14:53.354
00:14:53.354 real 0m9.653s
00:14:53.354 user 0m16.400s
00:14:53.354 sys 0m2.161s
00:14:53.354 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:53.354 22:59:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.354 ************************************
00:14:53.354 END TEST raid5f_state_function_test_sb
00:14:53.354 ************************************
00:14:53.354 22:59:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:14:53.354 22:59:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:53.354 22:59:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:53.354 22:59:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:53.354 ************************************
00:14:53.354 START TEST raid5f_superblock_test
00:14:53.354 ************************************
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96185
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96185
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96185 ']'
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:53.354 22:59:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:53.615 [2024-11-26 22:59:32.490711] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:14:53.615 [2024-11-26 22:59:32.490841] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96185 ]
00:14:53.615 [2024-11-26 22:59:32.625045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:53.615 [2024-11-26 22:59:32.663578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:53.615 [2024-11-26 22:59:32.690046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:53.615 [2024-11-26 22:59:32.733205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:53.615 [2024-11-26 22:59:32.733244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.555 malloc1
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.555 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 [2024-11-26 22:59:33.338085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:54.556 [2024-11-26 22:59:33.338225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.556 [2024-11-26 22:59:33.338290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:54.556 [2024-11-26 22:59:33.338324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.556 [2024-11-26 22:59:33.340364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.556 [2024-11-26 22:59:33.340434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:54.556 pt1
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 malloc2
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 [2024-11-26 22:59:33.370576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:54.556 [2024-11-26 22:59:33.370637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.556 [2024-11-26 22:59:33.370653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:54.556 [2024-11-26 22:59:33.370661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.556 [2024-11-26 22:59:33.372563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.556 [2024-11-26 22:59:33.372651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:54.556 pt2
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 malloc3
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 [2024-11-26 22:59:33.399059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:54.556 [2024-11-26 22:59:33.399159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.556 [2024-11-26 22:59:33.399194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:54.556 [2024-11-26 22:59:33.399221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.556 [2024-11-26 22:59:33.401186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.556 [2024-11-26 22:59:33.401264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:54.556 pt3
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 malloc4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 [2024-11-26 22:59:33.450010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:54.556 [2024-11-26 22:59:33.450204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.556 [2024-11-26 22:59:33.450321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:14:54.556 [2024-11-26 22:59:33.450397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.556 [2024-11-26 22:59:33.454827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.556 [2024-11-26 22:59:33.454959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:54.556 pt4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 [2024-11-26 22:59:33.463321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:54.556 [2024-11-26 22:59:33.466099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:54.556 [2024-11-26 22:59:33.466279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:54.556 [2024-11-26 22:59:33.466336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:54.556 [2024-11-26 22:59:33.466513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:14:54.556 [2024-11-26 22:59:33.466526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:54.556 [2024-11-26 22:59:33.466785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:14:54.556 [2024-11-26 22:59:33.467270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:14:54.556 [2024-11-26 22:59:33.467292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:14:54.556 [2024-11-26 22:59:33.467480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.556 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:54.556 "name": "raid_bdev1",
00:14:54.556 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25",
00:14:54.556 "strip_size_kb": 64,
00:14:54.557 "state": "online",
00:14:54.557 "raid_level": "raid5f",
00:14:54.557 "superblock": true,
00:14:54.557 "num_base_bdevs": 4,
00:14:54.557 "num_base_bdevs_discovered": 4,
00:14:54.557 "num_base_bdevs_operational": 4,
00:14:54.557 "base_bdevs_list": [
00:14:54.557 {
00:14:54.557 "name": "pt1",
00:14:54.557 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:54.557 "is_configured": true,
00:14:54.557 "data_offset": 2048,
00:14:54.557 "data_size": 63488
00:14:54.557 },
00:14:54.557 {
00:14:54.557 "name": "pt2",
00:14:54.557 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:54.557 "is_configured": true,
00:14:54.557 "data_offset": 2048,
00:14:54.557 "data_size": 63488
00:14:54.557 },
00:14:54.557 {
00:14:54.557 "name": "pt3",
00:14:54.557 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:54.557 "is_configured": true,
00:14:54.557 "data_offset": 2048,
00:14:54.557 "data_size": 63488
00:14:54.557 },
00:14:54.557 {
00:14:54.557 "name": "pt4",
00:14:54.557 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:54.557 "is_configured": true,
00:14:54.557 "data_offset": 2048,
00:14:54.557 "data_size": 63488
00:14:54.557 }
00:14:54.557 ]
00:14:54.557 }'
00:14:54.557 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:54.557 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.817 [2024-11-26 22:59:33.875722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:54.817 "name": "raid_bdev1",
00:14:54.817 "aliases": [
00:14:54.817 "41defe35-7c6a-4613-a14f-984146be2f25"
00:14:54.817 ],
00:14:54.817 "product_name": "Raid Volume",
00:14:54.817 "block_size": 512,
00:14:54.817 "num_blocks": 190464,
00:14:54.817 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25",
00:14:54.817 "assigned_rate_limits": {
00:14:54.817 "rw_ios_per_sec": 0,
00:14:54.817 "rw_mbytes_per_sec": 0,
00:14:54.817 "r_mbytes_per_sec": 0,
00:14:54.817 "w_mbytes_per_sec": 0
00:14:54.817 },
00:14:54.817 "claimed": false,
00:14:54.817 "zoned": false,
00:14:54.817 "supported_io_types": {
00:14:54.817 "read": true,
00:14:54.817 "write": true,
00:14:54.817 "unmap": false,
00:14:54.817 "flush": false,
00:14:54.817 "reset": true,
00:14:54.817 "nvme_admin": false,
00:14:54.817 "nvme_io": false,
00:14:54.817 "nvme_io_md": false,
00:14:54.817 "write_zeroes": true,
00:14:54.817 "zcopy": false,
00:14:54.817 "get_zone_info": false,
00:14:54.817 "zone_management": false,
00:14:54.817 "zone_append": false,
00:14:54.817 "compare": false,
00:14:54.817 "compare_and_write": false,
00:14:54.817 "abort": false,
00:14:54.817 "seek_hole": false,
00:14:54.817 "seek_data": false,
00:14:54.817 "copy": false,
00:14:54.817 "nvme_iov_md": false
00:14:54.817 },
00:14:54.817 "driver_specific": {
00:14:54.817 "raid": {
00:14:54.817 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25",
00:14:54.817 "strip_size_kb": 64,
00:14:54.817 "state": "online",
00:14:54.817 "raid_level": "raid5f",
00:14:54.817 "superblock": true,
00:14:54.817 "num_base_bdevs": 4,
00:14:54.817 "num_base_bdevs_discovered": 4,
00:14:54.817 "num_base_bdevs_operational": 4,
00:14:54.817 "base_bdevs_list": [
00:14:54.817 {
00:14:54.817 "name": "pt1",
00:14:54.817 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:54.817 "is_configured": true,
00:14:54.817 "data_offset": 2048,
00:14:54.817 "data_size": 63488
00:14:54.817 },
00:14:54.817 {
00:14:54.817 "name": "pt2",
00:14:54.817 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:54.817 "is_configured": true,
00:14:54.817 "data_offset": 2048,
00:14:54.817 "data_size": 63488
00:14:54.817 },
00:14:54.817 {
00:14:54.817 "name": "pt3",
00:14:54.817 "uuid":
"00000000-0000-0000-0000-000000000003", 00:14:54.817 "is_configured": true, 00:14:54.817 "data_offset": 2048, 00:14:54.817 "data_size": 63488 00:14:54.817 }, 00:14:54.817 { 00:14:54.817 "name": "pt4", 00:14:54.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:54.817 "is_configured": true, 00:14:54.817 "data_offset": 2048, 00:14:54.817 "data_size": 63488 00:14:54.817 } 00:14:54.817 ] 00:14:54.817 } 00:14:54.817 } 00:14:54.817 }' 00:14:54.817 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.077 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:55.077 pt2 00:14:55.077 pt3 00:14:55.077 pt4' 00:14:55.077 22:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.077 22:59:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.077 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.078 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 [2024-11-26 22:59:34.215794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=41defe35-7c6a-4613-a14f-984146be2f25 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 41defe35-7c6a-4613-a14f-984146be2f25 ']' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:55.338 [2024-11-26 22:59:34.239621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.338 [2024-11-26 22:59:34.239654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.338 [2024-11-26 22:59:34.239728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.338 [2024-11-26 22:59:34.239812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.338 [2024-11-26 22:59:34.239823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:55.338 22:59:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:55.338 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 [2024-11-26 22:59:34.399707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:55.339 [2024-11-26 22:59:34.401399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:14:55.339 [2024-11-26 22:59:34.401442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:55.339 [2024-11-26 22:59:34.401470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:55.339 [2024-11-26 22:59:34.401510] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:55.339 [2024-11-26 22:59:34.401550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:55.339 [2024-11-26 22:59:34.401566] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:55.339 [2024-11-26 22:59:34.401581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:55.339 [2024-11-26 22:59:34.401593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.339 [2024-11-26 22:59:34.401602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:14:55.339 request: 00:14:55.339 { 00:14:55.339 "name": "raid_bdev1", 00:14:55.339 "raid_level": "raid5f", 00:14:55.339 "base_bdevs": [ 00:14:55.339 "malloc1", 00:14:55.339 "malloc2", 00:14:55.339 "malloc3", 00:14:55.339 "malloc4" 00:14:55.339 ], 00:14:55.339 "strip_size_kb": 64, 00:14:55.339 "superblock": false, 00:14:55.339 "method": "bdev_raid_create", 00:14:55.339 "req_id": 1 00:14:55.339 } 00:14:55.339 Got JSON-RPC error response 00:14:55.339 response: 00:14:55.339 { 00:14:55.339 "code": -17, 00:14:55.339 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:55.339 } 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # 
es=1 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.339 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 [2024-11-26 22:59:34.459696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.339 [2024-11-26 22:59:34.459791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.339 [2024-11-26 22:59:34.459821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:55.339 [2024-11-26 22:59:34.459849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.339 [2024-11-26 22:59:34.461961] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:55.339 [2024-11-26 22:59:34.462034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.339 [2024-11-26 22:59:34.462117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:55.339 [2024-11-26 22:59:34.462180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.599 pt1 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.599 "name": "raid_bdev1", 00:14:55.599 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:55.599 "strip_size_kb": 64, 00:14:55.599 "state": "configuring", 00:14:55.599 "raid_level": "raid5f", 00:14:55.599 "superblock": true, 00:14:55.599 "num_base_bdevs": 4, 00:14:55.599 "num_base_bdevs_discovered": 1, 00:14:55.599 "num_base_bdevs_operational": 4, 00:14:55.599 "base_bdevs_list": [ 00:14:55.599 { 00:14:55.599 "name": "pt1", 00:14:55.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.599 "is_configured": true, 00:14:55.599 "data_offset": 2048, 00:14:55.599 "data_size": 63488 00:14:55.599 }, 00:14:55.599 { 00:14:55.599 "name": null, 00:14:55.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.599 "is_configured": false, 00:14:55.599 "data_offset": 2048, 00:14:55.599 "data_size": 63488 00:14:55.599 }, 00:14:55.599 { 00:14:55.599 "name": null, 00:14:55.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.599 "is_configured": false, 00:14:55.599 "data_offset": 2048, 00:14:55.599 "data_size": 63488 00:14:55.599 }, 00:14:55.599 { 00:14:55.599 "name": null, 00:14:55.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.599 "is_configured": false, 00:14:55.599 "data_offset": 2048, 00:14:55.599 "data_size": 63488 00:14:55.599 } 00:14:55.599 ] 00:14:55.599 }' 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.599 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 [2024-11-26 22:59:34.851802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.859 [2024-11-26 22:59:34.851854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.859 [2024-11-26 22:59:34.851869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:55.859 [2024-11-26 22:59:34.851879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.859 [2024-11-26 22:59:34.852189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.859 [2024-11-26 22:59:34.852206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.859 [2024-11-26 22:59:34.852277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:55.859 [2024-11-26 22:59:34.852299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.859 pt2 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 [2024-11-26 22:59:34.863798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.859 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.859 "name": "raid_bdev1", 00:14:55.859 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:55.859 "strip_size_kb": 64, 00:14:55.859 "state": "configuring", 00:14:55.859 "raid_level": "raid5f", 00:14:55.859 "superblock": true, 00:14:55.859 
"num_base_bdevs": 4, 00:14:55.859 "num_base_bdevs_discovered": 1, 00:14:55.859 "num_base_bdevs_operational": 4, 00:14:55.859 "base_bdevs_list": [ 00:14:55.860 { 00:14:55.860 "name": "pt1", 00:14:55.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.860 "is_configured": true, 00:14:55.860 "data_offset": 2048, 00:14:55.860 "data_size": 63488 00:14:55.860 }, 00:14:55.860 { 00:14:55.860 "name": null, 00:14:55.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.860 "is_configured": false, 00:14:55.860 "data_offset": 0, 00:14:55.860 "data_size": 63488 00:14:55.860 }, 00:14:55.860 { 00:14:55.860 "name": null, 00:14:55.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.860 "is_configured": false, 00:14:55.860 "data_offset": 2048, 00:14:55.860 "data_size": 63488 00:14:55.860 }, 00:14:55.860 { 00:14:55.860 "name": null, 00:14:55.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.860 "is_configured": false, 00:14:55.860 "data_offset": 2048, 00:14:55.860 "data_size": 63488 00:14:55.860 } 00:14:55.860 ] 00:14:55.860 }' 00:14:55.860 22:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.860 22:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.429 [2024-11-26 22:59:35.351933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.429 [2024-11-26 
22:59:35.352038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.429 [2024-11-26 22:59:35.352069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:56.429 [2024-11-26 22:59:35.352093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.429 [2024-11-26 22:59:35.352455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.429 [2024-11-26 22:59:35.352508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.429 [2024-11-26 22:59:35.352591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:56.429 [2024-11-26 22:59:35.352636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.429 pt2 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.429 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.429 [2024-11-26 22:59:35.363931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.430 [2024-11-26 22:59:35.363978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.430 [2024-11-26 22:59:35.363994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:56.430 [2024-11-26 22:59:35.364001] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:56.430 [2024-11-26 22:59:35.364310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.430 [2024-11-26 22:59:35.364327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.430 [2024-11-26 22:59:35.364377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:56.430 [2024-11-26 22:59:35.364419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.430 pt3 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.430 [2024-11-26 22:59:35.375936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:56.430 [2024-11-26 22:59:35.376019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.430 [2024-11-26 22:59:35.376036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:56.430 [2024-11-26 22:59:35.376044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.430 [2024-11-26 22:59:35.376334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.430 [2024-11-26 22:59:35.376350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:56.430 [2024-11-26 22:59:35.376399] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:56.430 [2024-11-26 22:59:35.376415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:56.430 [2024-11-26 22:59:35.376516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:56.430 [2024-11-26 22:59:35.376526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.430 [2024-11-26 22:59:35.376733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:56.430 [2024-11-26 22:59:35.377156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:56.430 [2024-11-26 22:59:35.377175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:56.430 [2024-11-26 22:59:35.377280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.430 pt4 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.430 "name": "raid_bdev1", 00:14:56.430 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:56.430 "strip_size_kb": 64, 00:14:56.430 "state": "online", 00:14:56.430 "raid_level": "raid5f", 00:14:56.430 "superblock": true, 00:14:56.430 "num_base_bdevs": 4, 00:14:56.430 "num_base_bdevs_discovered": 4, 00:14:56.430 "num_base_bdevs_operational": 4, 00:14:56.430 "base_bdevs_list": [ 00:14:56.430 { 00:14:56.430 "name": "pt1", 00:14:56.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.430 "is_configured": true, 00:14:56.430 "data_offset": 2048, 00:14:56.430 "data_size": 63488 00:14:56.430 }, 00:14:56.430 { 00:14:56.430 "name": "pt2", 00:14:56.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.430 "is_configured": true, 00:14:56.430 "data_offset": 2048, 00:14:56.430 "data_size": 63488 00:14:56.430 }, 00:14:56.430 { 00:14:56.430 "name": "pt3", 
00:14:56.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.430 "is_configured": true, 00:14:56.430 "data_offset": 2048, 00:14:56.430 "data_size": 63488 00:14:56.430 }, 00:14:56.430 { 00:14:56.430 "name": "pt4", 00:14:56.430 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.430 "is_configured": true, 00:14:56.430 "data_offset": 2048, 00:14:56.430 "data_size": 63488 00:14:56.430 } 00:14:56.430 ] 00:14:56.430 }' 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.430 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.000 [2024-11-26 22:59:35.836225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.000 22:59:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.000 "name": "raid_bdev1", 00:14:57.000 "aliases": [ 00:14:57.000 "41defe35-7c6a-4613-a14f-984146be2f25" 00:14:57.000 ], 00:14:57.000 "product_name": "Raid Volume", 00:14:57.000 "block_size": 512, 00:14:57.000 "num_blocks": 190464, 00:14:57.000 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:57.000 "assigned_rate_limits": { 00:14:57.000 "rw_ios_per_sec": 0, 00:14:57.000 "rw_mbytes_per_sec": 0, 00:14:57.000 "r_mbytes_per_sec": 0, 00:14:57.000 "w_mbytes_per_sec": 0 00:14:57.000 }, 00:14:57.000 "claimed": false, 00:14:57.000 "zoned": false, 00:14:57.000 "supported_io_types": { 00:14:57.000 "read": true, 00:14:57.000 "write": true, 00:14:57.000 "unmap": false, 00:14:57.000 "flush": false, 00:14:57.000 "reset": true, 00:14:57.000 "nvme_admin": false, 00:14:57.000 "nvme_io": false, 00:14:57.000 "nvme_io_md": false, 00:14:57.000 "write_zeroes": true, 00:14:57.000 "zcopy": false, 00:14:57.000 "get_zone_info": false, 00:14:57.000 "zone_management": false, 00:14:57.000 "zone_append": false, 00:14:57.000 "compare": false, 00:14:57.000 "compare_and_write": false, 00:14:57.000 "abort": false, 00:14:57.000 "seek_hole": false, 00:14:57.000 "seek_data": false, 00:14:57.000 "copy": false, 00:14:57.000 "nvme_iov_md": false 00:14:57.000 }, 00:14:57.000 "driver_specific": { 00:14:57.000 "raid": { 00:14:57.000 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:57.000 "strip_size_kb": 64, 00:14:57.000 "state": "online", 00:14:57.000 "raid_level": "raid5f", 00:14:57.000 "superblock": true, 00:14:57.000 "num_base_bdevs": 4, 00:14:57.000 "num_base_bdevs_discovered": 4, 00:14:57.000 "num_base_bdevs_operational": 4, 00:14:57.000 "base_bdevs_list": [ 00:14:57.000 { 00:14:57.000 "name": "pt1", 00:14:57.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.000 "is_configured": true, 00:14:57.000 "data_offset": 2048, 00:14:57.000 "data_size": 63488 00:14:57.000 }, 00:14:57.000 { 00:14:57.000 
"name": "pt2", 00:14:57.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.000 "is_configured": true, 00:14:57.000 "data_offset": 2048, 00:14:57.000 "data_size": 63488 00:14:57.000 }, 00:14:57.000 { 00:14:57.000 "name": "pt3", 00:14:57.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.000 "is_configured": true, 00:14:57.000 "data_offset": 2048, 00:14:57.000 "data_size": 63488 00:14:57.000 }, 00:14:57.000 { 00:14:57.000 "name": "pt4", 00:14:57.000 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.000 "is_configured": true, 00:14:57.000 "data_offset": 2048, 00:14:57.000 "data_size": 63488 00:14:57.000 } 00:14:57.000 ] 00:14:57.000 } 00:14:57.000 } 00:14:57.000 }' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.000 pt2 00:14:57.000 pt3 00:14:57.000 pt4' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 22:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:57.000 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.000 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.000 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.001 22:59:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.001 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:57.261 [2024-11-26 22:59:36.160307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 41defe35-7c6a-4613-a14f-984146be2f25 '!=' 41defe35-7c6a-4613-a14f-984146be2f25 ']' 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:57.261 22:59:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.261 [2024-11-26 22:59:36.204211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.261 22:59:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.261 "name": "raid_bdev1", 00:14:57.261 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:57.261 "strip_size_kb": 64, 00:14:57.261 "state": "online", 00:14:57.261 "raid_level": "raid5f", 00:14:57.261 "superblock": true, 00:14:57.261 "num_base_bdevs": 4, 00:14:57.261 "num_base_bdevs_discovered": 3, 00:14:57.261 "num_base_bdevs_operational": 3, 00:14:57.261 "base_bdevs_list": [ 00:14:57.261 { 00:14:57.261 "name": null, 00:14:57.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.261 "is_configured": false, 00:14:57.261 "data_offset": 0, 00:14:57.261 "data_size": 63488 00:14:57.261 }, 00:14:57.261 { 00:14:57.261 "name": "pt2", 00:14:57.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.261 "is_configured": true, 00:14:57.261 "data_offset": 2048, 00:14:57.261 "data_size": 63488 00:14:57.261 }, 00:14:57.261 { 00:14:57.261 "name": "pt3", 00:14:57.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.261 "is_configured": true, 00:14:57.261 "data_offset": 2048, 00:14:57.261 "data_size": 63488 00:14:57.261 }, 00:14:57.261 { 00:14:57.261 "name": "pt4", 00:14:57.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.261 "is_configured": true, 00:14:57.261 "data_offset": 2048, 00:14:57.261 "data_size": 63488 00:14:57.261 } 00:14:57.261 ] 00:14:57.261 }' 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.261 22:59:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 [2024-11-26 22:59:36.656303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.832 [2024-11-26 22:59:36.656378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.832 [2024-11-26 22:59:36.656461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.832 [2024-11-26 22:59:36.656540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.832 [2024-11-26 22:59:36.656576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:57.832 22:59:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 
00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 [2024-11-26 22:59:36.756317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.832 [2024-11-26 22:59:36.756364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.832 [2024-11-26 22:59:36.756380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:57.832 [2024-11-26 22:59:36.756388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.832 [2024-11-26 22:59:36.758382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.832 [2024-11-26 22:59:36.758449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.832 [2024-11-26 22:59:36.758536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:57.832 [2024-11-26 22:59:36.758584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.832 pt2 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.832 "name": "raid_bdev1", 00:14:57.832 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:57.832 "strip_size_kb": 64, 00:14:57.832 "state": "configuring", 00:14:57.832 "raid_level": "raid5f", 00:14:57.832 "superblock": true, 00:14:57.832 "num_base_bdevs": 4, 00:14:57.832 "num_base_bdevs_discovered": 1, 00:14:57.832 "num_base_bdevs_operational": 3, 00:14:57.832 "base_bdevs_list": [ 00:14:57.832 { 00:14:57.832 "name": null, 00:14:57.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.832 
"is_configured": false, 00:14:57.832 "data_offset": 2048, 00:14:57.832 "data_size": 63488 00:14:57.832 }, 00:14:57.832 { 00:14:57.832 "name": "pt2", 00:14:57.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.832 "is_configured": true, 00:14:57.832 "data_offset": 2048, 00:14:57.832 "data_size": 63488 00:14:57.832 }, 00:14:57.832 { 00:14:57.832 "name": null, 00:14:57.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.832 "is_configured": false, 00:14:57.832 "data_offset": 2048, 00:14:57.832 "data_size": 63488 00:14:57.832 }, 00:14:57.832 { 00:14:57.832 "name": null, 00:14:57.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.832 "is_configured": false, 00:14:57.832 "data_offset": 2048, 00:14:57.832 "data_size": 63488 00:14:57.832 } 00:14:57.832 ] 00:14:57.832 }' 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.832 22:59:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.402 [2024-11-26 22:59:37.228462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.402 [2024-11-26 22:59:37.228598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.402 [2024-11-26 22:59:37.228627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:58.402 [2024-11-26 
22:59:37.228635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.402 [2024-11-26 22:59:37.228936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.402 [2024-11-26 22:59:37.228951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.402 [2024-11-26 22:59:37.229011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.402 [2024-11-26 22:59:37.229030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.402 pt3 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.402 22:59:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.402 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.402 "name": "raid_bdev1", 00:14:58.402 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:58.402 "strip_size_kb": 64, 00:14:58.402 "state": "configuring", 00:14:58.402 "raid_level": "raid5f", 00:14:58.402 "superblock": true, 00:14:58.402 "num_base_bdevs": 4, 00:14:58.402 "num_base_bdevs_discovered": 2, 00:14:58.402 "num_base_bdevs_operational": 3, 00:14:58.402 "base_bdevs_list": [ 00:14:58.402 { 00:14:58.402 "name": null, 00:14:58.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.403 "is_configured": false, 00:14:58.403 "data_offset": 2048, 00:14:58.403 "data_size": 63488 00:14:58.403 }, 00:14:58.403 { 00:14:58.403 "name": "pt2", 00:14:58.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.403 "is_configured": true, 00:14:58.403 "data_offset": 2048, 00:14:58.403 "data_size": 63488 00:14:58.403 }, 00:14:58.403 { 00:14:58.403 "name": "pt3", 00:14:58.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.403 "is_configured": true, 00:14:58.403 "data_offset": 2048, 00:14:58.403 "data_size": 63488 00:14:58.403 }, 00:14:58.403 { 00:14:58.403 "name": null, 00:14:58.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.403 "is_configured": false, 00:14:58.403 "data_offset": 2048, 00:14:58.403 "data_size": 63488 00:14:58.403 } 00:14:58.403 ] 00:14:58.403 }' 00:14:58.403 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.403 22:59:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.663 [2024-11-26 22:59:37.712599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:58.663 [2024-11-26 22:59:37.712701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.663 [2024-11-26 22:59:37.712735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:58.663 [2024-11-26 22:59:37.712761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.663 [2024-11-26 22:59:37.713114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.663 [2024-11-26 22:59:37.713172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:58.663 [2024-11-26 22:59:37.713282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:58.663 [2024-11-26 22:59:37.713331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:58.663 [2024-11-26 22:59:37.713443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:58.663 [2024-11-26 22:59:37.713477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.663 [2024-11-26 22:59:37.713696] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:58.663 [2024-11-26 22:59:37.714198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:58.663 [2024-11-26 22:59:37.714258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:58.663 [2024-11-26 22:59:37.714507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.663 pt4 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.663 "name": "raid_bdev1", 00:14:58.663 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:58.663 "strip_size_kb": 64, 00:14:58.663 "state": "online", 00:14:58.663 "raid_level": "raid5f", 00:14:58.663 "superblock": true, 00:14:58.663 "num_base_bdevs": 4, 00:14:58.663 "num_base_bdevs_discovered": 3, 00:14:58.663 "num_base_bdevs_operational": 3, 00:14:58.663 "base_bdevs_list": [ 00:14:58.663 { 00:14:58.663 "name": null, 00:14:58.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.663 "is_configured": false, 00:14:58.663 "data_offset": 2048, 00:14:58.663 "data_size": 63488 00:14:58.663 }, 00:14:58.663 { 00:14:58.663 "name": "pt2", 00:14:58.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.663 "is_configured": true, 00:14:58.663 "data_offset": 2048, 00:14:58.663 "data_size": 63488 00:14:58.663 }, 00:14:58.663 { 00:14:58.663 "name": "pt3", 00:14:58.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.663 "is_configured": true, 00:14:58.663 "data_offset": 2048, 00:14:58.663 "data_size": 63488 00:14:58.663 }, 00:14:58.663 { 00:14:58.663 "name": "pt4", 00:14:58.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.663 "is_configured": true, 00:14:58.663 "data_offset": 2048, 00:14:58.663 "data_size": 63488 00:14:58.663 } 00:14:58.663 ] 00:14:58.663 }' 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.663 22:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 [2024-11-26 22:59:38.164687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.233 [2024-11-26 22:59:38.164759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.233 [2024-11-26 22:59:38.164816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.233 [2024-11-26 22:59:38.164875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.233 [2024-11-26 22:59:38.164885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # 
rpc_cmd bdev_passthru_delete pt4 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 [2024-11-26 22:59:38.236720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.233 [2024-11-26 22:59:38.236775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.233 [2024-11-26 22:59:38.236790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:59.233 [2024-11-26 22:59:38.236800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.233 [2024-11-26 22:59:38.238824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.233 [2024-11-26 22:59:38.238905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.233 [2024-11-26 22:59:38.238969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:59.233 [2024-11-26 22:59:38.239004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.233 [2024-11-26 22:59:38.239095] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:59.233 [2024-11-26 22:59:38.239109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:14:59.233 [2024-11-26 22:59:38.239131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:14:59.233 [2024-11-26 22:59:38.239163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.233 [2024-11-26 22:59:38.239240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:59.233 pt1 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.233 "name": "raid_bdev1", 00:14:59.233 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:59.233 "strip_size_kb": 64, 00:14:59.233 "state": "configuring", 00:14:59.233 "raid_level": "raid5f", 00:14:59.233 "superblock": true, 00:14:59.233 "num_base_bdevs": 4, 00:14:59.233 "num_base_bdevs_discovered": 2, 00:14:59.233 "num_base_bdevs_operational": 3, 00:14:59.233 "base_bdevs_list": [ 00:14:59.233 { 00:14:59.233 "name": null, 00:14:59.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.233 "is_configured": false, 00:14:59.233 "data_offset": 2048, 00:14:59.233 "data_size": 63488 00:14:59.233 }, 00:14:59.233 { 00:14:59.233 "name": "pt2", 00:14:59.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.233 "is_configured": true, 00:14:59.233 "data_offset": 2048, 00:14:59.233 "data_size": 63488 00:14:59.233 }, 00:14:59.233 { 00:14:59.233 "name": "pt3", 00:14:59.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 2048, 00:14:59.234 "data_size": 63488 00:14:59.234 }, 00:14:59.234 { 00:14:59.234 "name": null, 00:14:59.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.234 "is_configured": false, 00:14:59.234 "data_offset": 2048, 00:14:59.234 "data_size": 63488 00:14:59.234 } 00:14:59.234 ] 00:14:59.234 }' 00:14:59.234 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.234 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 [2024-11-26 22:59:38.704824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:59.814 [2024-11-26 22:59:38.704874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.814 [2024-11-26 22:59:38.704892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:59.814 [2024-11-26 22:59:38.704900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.814 [2024-11-26 22:59:38.705224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.814 [2024-11-26 22:59:38.705257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:59.814 [2024-11-26 22:59:38.705318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:59.814 [2024-11-26 22:59:38.705340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:59.814 [2024-11-26 
22:59:38.705427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:59.814 [2024-11-26 22:59:38.705439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:59.814 [2024-11-26 22:59:38.705657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:59.814 [2024-11-26 22:59:38.706151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:59.814 [2024-11-26 22:59:38.706167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:59.814 [2024-11-26 22:59:38.706336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.814 pt4 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.814 "name": "raid_bdev1", 00:14:59.814 "uuid": "41defe35-7c6a-4613-a14f-984146be2f25", 00:14:59.814 "strip_size_kb": 64, 00:14:59.814 "state": "online", 00:14:59.814 "raid_level": "raid5f", 00:14:59.814 "superblock": true, 00:14:59.814 "num_base_bdevs": 4, 00:14:59.814 "num_base_bdevs_discovered": 3, 00:14:59.814 "num_base_bdevs_operational": 3, 00:14:59.814 "base_bdevs_list": [ 00:14:59.814 { 00:14:59.814 "name": null, 00:14:59.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.814 "is_configured": false, 00:14:59.814 "data_offset": 2048, 00:14:59.814 "data_size": 63488 00:14:59.814 }, 00:14:59.814 { 00:14:59.814 "name": "pt2", 00:14:59.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.814 "is_configured": true, 00:14:59.814 "data_offset": 2048, 00:14:59.814 "data_size": 63488 00:14:59.814 }, 00:14:59.814 { 00:14:59.814 "name": "pt3", 00:14:59.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.814 "is_configured": true, 00:14:59.814 "data_offset": 2048, 00:14:59.814 "data_size": 63488 00:14:59.814 }, 00:14:59.814 { 00:14:59.814 "name": "pt4", 00:14:59.814 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.814 "is_configured": true, 00:14:59.814 "data_offset": 2048, 00:14:59.814 "data_size": 63488 00:14:59.814 } 00:14:59.814 ] 00:14:59.814 }' 00:14:59.814 22:59:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.814 22:59:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.074 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:00.074 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:00.074 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.074 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.074 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:00.334 [2024-11-26 22:59:39.221135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 41defe35-7c6a-4613-a14f-984146be2f25 '!=' 41defe35-7c6a-4613-a14f-984146be2f25 ']' 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96185 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96185 ']' 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 
96185 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96185 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.334 killing process with pid 96185 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96185' 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96185 00:15:00.334 [2024-11-26 22:59:39.309712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.334 [2024-11-26 22:59:39.309787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.334 [2024-11-26 22:59:39.309852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.334 [2024-11-26 22:59:39.309863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:00.334 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96185 00:15:00.334 [2024-11-26 22:59:39.352936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.594 22:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:00.594 00:15:00.594 real 0m7.171s 00:15:00.594 user 0m12.064s 00:15:00.594 sys 0m1.552s 00:15:00.594 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.594 ************************************ 00:15:00.594 END TEST raid5f_superblock_test 00:15:00.594 
************************************ 00:15:00.594 22:59:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.594 22:59:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:00.594 22:59:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:00.594 22:59:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:00.594 22:59:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.594 22:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.594 ************************************ 00:15:00.594 START TEST raid5f_rebuild_test 00:15:00.594 ************************************ 00:15:00.594 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:00.594 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:00.595 22:59:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96655 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96655 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 96655 ']' 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.595 22:59:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.855 [2024-11-26 22:59:39.758628] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:00.855 [2024-11-26 22:59:39.758829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96655 ] 00:15:00.855 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.855 Zero copy mechanism will not be used. 00:15:00.855 [2024-11-26 22:59:39.899331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:15:00.855 [2024-11-26 22:59:39.938075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.855 [2024-11-26 22:59:39.964682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.114 [2024-11-26 22:59:40.008240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.114 [2024-11-26 22:59:40.008303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.684 BaseBdev1_malloc 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.684 [2024-11-26 22:59:40.620880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.684 [2024-11-26 22:59:40.620960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.684 [2024-11-26 22:59:40.620985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:01.684 [2024-11-26 22:59:40.620998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.684 [2024-11-26 22:59:40.623017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.684 [2024-11-26 22:59:40.623142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.684 BaseBdev1 00:15:01.684 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 BaseBdev2_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-26 22:59:40.649393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.685 [2024-11-26 22:59:40.649520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.685 [2024-11-26 22:59:40.649540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.685 [2024-11-26 22:59:40.649550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.685 [2024-11-26 22:59:40.651500] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.685 [2024-11-26 22:59:40.651537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.685 BaseBdev2 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 BaseBdev3_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-26 22:59:40.677870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:01.685 [2024-11-26 22:59:40.677925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.685 [2024-11-26 22:59:40.677943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.685 [2024-11-26 22:59:40.677953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.685 [2024-11-26 22:59:40.679891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.685 [2024-11-26 22:59:40.679931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:01.685 
BaseBdev3 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 BaseBdev4_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-26 22:59:40.714404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:01.685 [2024-11-26 22:59:40.714460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.685 [2024-11-26 22:59:40.714478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:01.685 [2024-11-26 22:59:40.714487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.685 [2024-11-26 22:59:40.716487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.685 [2024-11-26 22:59:40.716575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:01.685 BaseBdev4 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 spare_malloc 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 spare_delay 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-26 22:59:40.754901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.685 [2024-11-26 22:59:40.754950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.685 [2024-11-26 22:59:40.754965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:01.685 [2024-11-26 22:59:40.754974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.685 [2024-11-26 22:59:40.756870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.685 [2024-11-26 22:59:40.756910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.685 spare 00:15:01.685 22:59:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-26 22:59:40.766972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.685 [2024-11-26 22:59:40.768674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.685 [2024-11-26 22:59:40.768731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.685 [2024-11-26 22:59:40.768768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:01.685 [2024-11-26 22:59:40.768841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:01.685 [2024-11-26 22:59:40.768852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:01.685 [2024-11-26 22:59:40.769083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:01.685 [2024-11-26 22:59:40.769515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:01.685 [2024-11-26 22:59:40.769527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:01.685 [2024-11-26 22:59:40.769652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:01.685 
22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.945 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.945 "name": "raid_bdev1", 00:15:01.945 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:01.945 "strip_size_kb": 64, 00:15:01.945 "state": "online", 00:15:01.945 "raid_level": "raid5f", 00:15:01.945 "superblock": false, 00:15:01.945 "num_base_bdevs": 4, 00:15:01.945 "num_base_bdevs_discovered": 4, 00:15:01.945 "num_base_bdevs_operational": 4, 00:15:01.945 "base_bdevs_list": [ 00:15:01.945 { 
00:15:01.945 "name": "BaseBdev1", 00:15:01.945 "uuid": "91bc6538-c7cb-5e75-acbc-e4801f3888e8", 00:15:01.945 "is_configured": true, 00:15:01.945 "data_offset": 0, 00:15:01.945 "data_size": 65536 00:15:01.945 }, 00:15:01.945 { 00:15:01.945 "name": "BaseBdev2", 00:15:01.945 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:01.945 "is_configured": true, 00:15:01.945 "data_offset": 0, 00:15:01.945 "data_size": 65536 00:15:01.945 }, 00:15:01.945 { 00:15:01.945 "name": "BaseBdev3", 00:15:01.945 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:01.945 "is_configured": true, 00:15:01.945 "data_offset": 0, 00:15:01.945 "data_size": 65536 00:15:01.945 }, 00:15:01.945 { 00:15:01.945 "name": "BaseBdev4", 00:15:01.945 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:01.945 "is_configured": true, 00:15:01.945 "data_offset": 0, 00:15:01.945 "data_size": 65536 00:15:01.945 } 00:15:01.945 ] 00:15:01.945 }' 00:15:01.945 22:59:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.945 22:59:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.204 [2024-11-26 22:59:41.215611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.204 22:59:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.204 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:02.463 [2024-11-26 22:59:41.487547] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:02.463 /dev/nbd0 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.463 1+0 records in 00:15:02.463 1+0 records out 00:15:02.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505557 s, 8.1 MB/s 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@893 -- # return 0 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:02.463 22:59:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:03.033 512+0 records in 00:15:03.033 512+0 records out 00:15:03.033 100663296 bytes (101 MB, 96 MiB) copied, 0.580096 s, 174 MB/s 00:15:03.033 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.033 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.033 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.033 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.293 [2024-11-26 22:59:42.360573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.293 [2024-11-26 22:59:42.372680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.293 22:59:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.293 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.553 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.553 "name": "raid_bdev1", 00:15:03.553 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:03.553 "strip_size_kb": 64, 00:15:03.553 "state": "online", 00:15:03.553 "raid_level": "raid5f", 00:15:03.553 "superblock": false, 00:15:03.553 "num_base_bdevs": 4, 00:15:03.553 "num_base_bdevs_discovered": 3, 00:15:03.553 "num_base_bdevs_operational": 3, 00:15:03.553 "base_bdevs_list": [ 00:15:03.553 { 00:15:03.553 "name": null, 00:15:03.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.553 "is_configured": false, 00:15:03.553 "data_offset": 0, 00:15:03.553 "data_size": 65536 00:15:03.553 }, 00:15:03.553 { 00:15:03.553 "name": "BaseBdev2", 00:15:03.553 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:03.553 "is_configured": true, 00:15:03.553 "data_offset": 0, 00:15:03.553 "data_size": 65536 00:15:03.553 }, 00:15:03.553 { 00:15:03.553 "name": "BaseBdev3", 00:15:03.553 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:03.553 "is_configured": true, 00:15:03.553 "data_offset": 0, 00:15:03.553 "data_size": 65536 00:15:03.553 }, 00:15:03.553 { 00:15:03.553 "name": "BaseBdev4", 00:15:03.553 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 
00:15:03.553 "is_configured": true, 00:15:03.553 "data_offset": 0, 00:15:03.553 "data_size": 65536 00:15:03.553 } 00:15:03.553 ] 00:15:03.553 }' 00:15:03.553 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.553 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.812 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.812 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 [2024-11-26 22:59:42.816760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.812 [2024-11-26 22:59:42.820973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:15:03.812 22:59:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.813 22:59:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:03.813 [2024-11-26 22:59:42.823098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.768 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.768 "name": "raid_bdev1", 00:15:04.768 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:04.768 "strip_size_kb": 64, 00:15:04.768 "state": "online", 00:15:04.768 "raid_level": "raid5f", 00:15:04.768 "superblock": false, 00:15:04.768 "num_base_bdevs": 4, 00:15:04.768 "num_base_bdevs_discovered": 4, 00:15:04.768 "num_base_bdevs_operational": 4, 00:15:04.768 "process": { 00:15:04.768 "type": "rebuild", 00:15:04.768 "target": "spare", 00:15:04.768 "progress": { 00:15:04.768 "blocks": 19200, 00:15:04.768 "percent": 9 00:15:04.768 } 00:15:04.768 }, 00:15:04.768 "base_bdevs_list": [ 00:15:04.768 { 00:15:04.768 "name": "spare", 00:15:04.768 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:04.768 "is_configured": true, 00:15:04.768 "data_offset": 0, 00:15:04.768 "data_size": 65536 00:15:04.768 }, 00:15:04.768 { 00:15:04.768 "name": "BaseBdev2", 00:15:04.768 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:04.768 "is_configured": true, 00:15:04.768 "data_offset": 0, 00:15:04.768 "data_size": 65536 00:15:04.768 }, 00:15:04.768 { 00:15:04.768 "name": "BaseBdev3", 00:15:04.768 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:04.768 "is_configured": true, 00:15:04.768 "data_offset": 0, 00:15:04.768 "data_size": 65536 00:15:04.768 }, 00:15:04.768 { 00:15:04.768 "name": "BaseBdev4", 00:15:04.768 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:04.768 "is_configured": true, 00:15:04.768 "data_offset": 0, 00:15:04.768 "data_size": 65536 00:15:04.768 } 00:15:04.768 ] 00:15:04.768 }' 00:15:04.768 22:59:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.028 22:59:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.028 [2024-11-26 22:59:43.961973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.028 [2024-11-26 22:59:44.030643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.028 [2024-11-26 22:59:44.030761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.028 [2024-11-26 22:59:44.030780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.028 [2024-11-26 22:59:44.030796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.028 22:59:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.028 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.029 "name": "raid_bdev1", 00:15:05.029 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:05.029 "strip_size_kb": 64, 00:15:05.029 "state": "online", 00:15:05.029 "raid_level": "raid5f", 00:15:05.029 "superblock": false, 00:15:05.029 "num_base_bdevs": 4, 00:15:05.029 "num_base_bdevs_discovered": 3, 00:15:05.029 "num_base_bdevs_operational": 3, 00:15:05.029 "base_bdevs_list": [ 00:15:05.029 { 00:15:05.029 "name": null, 00:15:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.029 "is_configured": false, 00:15:05.029 "data_offset": 0, 00:15:05.029 "data_size": 65536 00:15:05.029 }, 00:15:05.029 { 00:15:05.029 "name": "BaseBdev2", 00:15:05.029 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:05.029 
"is_configured": true, 00:15:05.029 "data_offset": 0, 00:15:05.029 "data_size": 65536 00:15:05.029 }, 00:15:05.029 { 00:15:05.029 "name": "BaseBdev3", 00:15:05.029 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:05.029 "is_configured": true, 00:15:05.029 "data_offset": 0, 00:15:05.029 "data_size": 65536 00:15:05.029 }, 00:15:05.029 { 00:15:05.029 "name": "BaseBdev4", 00:15:05.029 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:05.029 "is_configured": true, 00:15:05.029 "data_offset": 0, 00:15:05.029 "data_size": 65536 00:15:05.029 } 00:15:05.029 ] 00:15:05.029 }' 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.029 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.598 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.598 "name": 
"raid_bdev1", 00:15:05.598 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:05.598 "strip_size_kb": 64, 00:15:05.598 "state": "online", 00:15:05.598 "raid_level": "raid5f", 00:15:05.598 "superblock": false, 00:15:05.598 "num_base_bdevs": 4, 00:15:05.598 "num_base_bdevs_discovered": 3, 00:15:05.598 "num_base_bdevs_operational": 3, 00:15:05.598 "base_bdevs_list": [ 00:15:05.598 { 00:15:05.598 "name": null, 00:15:05.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.598 "is_configured": false, 00:15:05.599 "data_offset": 0, 00:15:05.599 "data_size": 65536 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "name": "BaseBdev2", 00:15:05.599 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:05.599 "is_configured": true, 00:15:05.599 "data_offset": 0, 00:15:05.599 "data_size": 65536 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "name": "BaseBdev3", 00:15:05.599 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:05.599 "is_configured": true, 00:15:05.599 "data_offset": 0, 00:15:05.599 "data_size": 65536 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "name": "BaseBdev4", 00:15:05.599 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:05.599 "is_configured": true, 00:15:05.599 "data_offset": 0, 00:15:05.599 "data_size": 65536 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 }' 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.599 22:59:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.599 [2024-11-26 22:59:44.608469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.599 [2024-11-26 22:59:44.611943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:15:05.599 [2024-11-26 22:59:44.614011] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.599 22:59:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.538 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.798 "name": "raid_bdev1", 00:15:06.798 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:06.798 "strip_size_kb": 64, 00:15:06.798 
"state": "online", 00:15:06.798 "raid_level": "raid5f", 00:15:06.798 "superblock": false, 00:15:06.798 "num_base_bdevs": 4, 00:15:06.798 "num_base_bdevs_discovered": 4, 00:15:06.798 "num_base_bdevs_operational": 4, 00:15:06.798 "process": { 00:15:06.798 "type": "rebuild", 00:15:06.798 "target": "spare", 00:15:06.798 "progress": { 00:15:06.798 "blocks": 19200, 00:15:06.798 "percent": 9 00:15:06.798 } 00:15:06.798 }, 00:15:06.798 "base_bdevs_list": [ 00:15:06.798 { 00:15:06.798 "name": "spare", 00:15:06.798 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:06.798 "is_configured": true, 00:15:06.798 "data_offset": 0, 00:15:06.798 "data_size": 65536 00:15:06.798 }, 00:15:06.798 { 00:15:06.798 "name": "BaseBdev2", 00:15:06.798 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:06.798 "is_configured": true, 00:15:06.798 "data_offset": 0, 00:15:06.798 "data_size": 65536 00:15:06.798 }, 00:15:06.798 { 00:15:06.798 "name": "BaseBdev3", 00:15:06.798 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:06.798 "is_configured": true, 00:15:06.798 "data_offset": 0, 00:15:06.798 "data_size": 65536 00:15:06.798 }, 00:15:06.798 { 00:15:06.798 "name": "BaseBdev4", 00:15:06.798 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:06.798 "is_configured": true, 00:15:06.798 "data_offset": 0, 00:15:06.798 "data_size": 65536 00:15:06.798 } 00:15:06.798 ] 00:15:06.798 }' 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=514 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.798 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.799 "name": "raid_bdev1", 00:15:06.799 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:06.799 "strip_size_kb": 64, 00:15:06.799 "state": "online", 00:15:06.799 "raid_level": "raid5f", 00:15:06.799 "superblock": false, 00:15:06.799 "num_base_bdevs": 4, 00:15:06.799 "num_base_bdevs_discovered": 4, 00:15:06.799 "num_base_bdevs_operational": 4, 00:15:06.799 "process": { 00:15:06.799 "type": "rebuild", 
00:15:06.799 "target": "spare", 00:15:06.799 "progress": { 00:15:06.799 "blocks": 21120, 00:15:06.799 "percent": 10 00:15:06.799 } 00:15:06.799 }, 00:15:06.799 "base_bdevs_list": [ 00:15:06.799 { 00:15:06.799 "name": "spare", 00:15:06.799 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:06.799 "is_configured": true, 00:15:06.799 "data_offset": 0, 00:15:06.799 "data_size": 65536 00:15:06.799 }, 00:15:06.799 { 00:15:06.799 "name": "BaseBdev2", 00:15:06.799 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:06.799 "is_configured": true, 00:15:06.799 "data_offset": 0, 00:15:06.799 "data_size": 65536 00:15:06.799 }, 00:15:06.799 { 00:15:06.799 "name": "BaseBdev3", 00:15:06.799 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:06.799 "is_configured": true, 00:15:06.799 "data_offset": 0, 00:15:06.799 "data_size": 65536 00:15:06.799 }, 00:15:06.799 { 00:15:06.799 "name": "BaseBdev4", 00:15:06.799 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:06.799 "is_configured": true, 00:15:06.799 "data_offset": 0, 00:15:06.799 "data_size": 65536 00:15:06.799 } 00:15:06.799 ] 00:15:06.799 }' 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.799 22:59:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.181 "name": "raid_bdev1", 00:15:08.181 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:08.181 "strip_size_kb": 64, 00:15:08.181 "state": "online", 00:15:08.181 "raid_level": "raid5f", 00:15:08.181 "superblock": false, 00:15:08.181 "num_base_bdevs": 4, 00:15:08.181 "num_base_bdevs_discovered": 4, 00:15:08.181 "num_base_bdevs_operational": 4, 00:15:08.181 "process": { 00:15:08.181 "type": "rebuild", 00:15:08.181 "target": "spare", 00:15:08.181 "progress": { 00:15:08.181 "blocks": 44160, 00:15:08.181 "percent": 22 00:15:08.181 } 00:15:08.181 }, 00:15:08.181 "base_bdevs_list": [ 00:15:08.181 { 00:15:08.181 "name": "spare", 00:15:08.181 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:08.181 "is_configured": true, 00:15:08.181 "data_offset": 0, 00:15:08.181 "data_size": 65536 00:15:08.181 }, 00:15:08.181 { 00:15:08.181 "name": "BaseBdev2", 00:15:08.181 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:08.181 "is_configured": true, 00:15:08.181 "data_offset": 0, 00:15:08.181 
"data_size": 65536 00:15:08.181 }, 00:15:08.181 { 00:15:08.181 "name": "BaseBdev3", 00:15:08.181 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:08.181 "is_configured": true, 00:15:08.181 "data_offset": 0, 00:15:08.181 "data_size": 65536 00:15:08.181 }, 00:15:08.181 { 00:15:08.181 "name": "BaseBdev4", 00:15:08.181 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:08.181 "is_configured": true, 00:15:08.181 "data_offset": 0, 00:15:08.181 "data_size": 65536 00:15:08.181 } 00:15:08.181 ] 00:15:08.181 }' 00:15:08.181 22:59:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.181 22:59:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.181 22:59:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.181 22:59:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.181 22:59:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.120 22:59:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.121 "name": "raid_bdev1", 00:15:09.121 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:09.121 "strip_size_kb": 64, 00:15:09.121 "state": "online", 00:15:09.121 "raid_level": "raid5f", 00:15:09.121 "superblock": false, 00:15:09.121 "num_base_bdevs": 4, 00:15:09.121 "num_base_bdevs_discovered": 4, 00:15:09.121 "num_base_bdevs_operational": 4, 00:15:09.121 "process": { 00:15:09.121 "type": "rebuild", 00:15:09.121 "target": "spare", 00:15:09.121 "progress": { 00:15:09.121 "blocks": 65280, 00:15:09.121 "percent": 33 00:15:09.121 } 00:15:09.121 }, 00:15:09.121 "base_bdevs_list": [ 00:15:09.121 { 00:15:09.121 "name": "spare", 00:15:09.121 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:09.121 "is_configured": true, 00:15:09.121 "data_offset": 0, 00:15:09.121 "data_size": 65536 00:15:09.121 }, 00:15:09.121 { 00:15:09.121 "name": "BaseBdev2", 00:15:09.121 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:09.121 "is_configured": true, 00:15:09.121 "data_offset": 0, 00:15:09.121 "data_size": 65536 00:15:09.121 }, 00:15:09.121 { 00:15:09.121 "name": "BaseBdev3", 00:15:09.121 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:09.121 "is_configured": true, 00:15:09.121 "data_offset": 0, 00:15:09.121 "data_size": 65536 00:15:09.121 }, 00:15:09.121 { 00:15:09.121 "name": "BaseBdev4", 00:15:09.121 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:09.121 "is_configured": true, 00:15:09.121 "data_offset": 0, 00:15:09.121 "data_size": 65536 00:15:09.121 } 00:15:09.121 ] 00:15:09.121 }' 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.121 22:59:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.501 "name": "raid_bdev1", 00:15:10.501 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:10.501 "strip_size_kb": 64, 00:15:10.501 "state": "online", 00:15:10.501 "raid_level": "raid5f", 00:15:10.501 "superblock": false, 00:15:10.501 
"num_base_bdevs": 4, 00:15:10.501 "num_base_bdevs_discovered": 4, 00:15:10.501 "num_base_bdevs_operational": 4, 00:15:10.501 "process": { 00:15:10.501 "type": "rebuild", 00:15:10.501 "target": "spare", 00:15:10.501 "progress": { 00:15:10.501 "blocks": 86400, 00:15:10.501 "percent": 43 00:15:10.501 } 00:15:10.501 }, 00:15:10.501 "base_bdevs_list": [ 00:15:10.501 { 00:15:10.501 "name": "spare", 00:15:10.501 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:10.501 "is_configured": true, 00:15:10.501 "data_offset": 0, 00:15:10.501 "data_size": 65536 00:15:10.501 }, 00:15:10.501 { 00:15:10.501 "name": "BaseBdev2", 00:15:10.501 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:10.501 "is_configured": true, 00:15:10.501 "data_offset": 0, 00:15:10.501 "data_size": 65536 00:15:10.501 }, 00:15:10.501 { 00:15:10.501 "name": "BaseBdev3", 00:15:10.501 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:10.501 "is_configured": true, 00:15:10.501 "data_offset": 0, 00:15:10.501 "data_size": 65536 00:15:10.501 }, 00:15:10.501 { 00:15:10.501 "name": "BaseBdev4", 00:15:10.501 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:10.501 "is_configured": true, 00:15:10.501 "data_offset": 0, 00:15:10.501 "data_size": 65536 00:15:10.501 } 00:15:10.501 ] 00:15:10.501 }' 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.501 22:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.441 "name": "raid_bdev1", 00:15:11.441 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:11.441 "strip_size_kb": 64, 00:15:11.441 "state": "online", 00:15:11.441 "raid_level": "raid5f", 00:15:11.441 "superblock": false, 00:15:11.441 "num_base_bdevs": 4, 00:15:11.441 "num_base_bdevs_discovered": 4, 00:15:11.441 "num_base_bdevs_operational": 4, 00:15:11.441 "process": { 00:15:11.441 "type": "rebuild", 00:15:11.441 "target": "spare", 00:15:11.441 "progress": { 00:15:11.441 "blocks": 109440, 00:15:11.441 "percent": 55 00:15:11.441 } 00:15:11.441 }, 00:15:11.441 "base_bdevs_list": [ 00:15:11.441 { 00:15:11.441 "name": "spare", 00:15:11.441 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 0, 00:15:11.441 "data_size": 65536 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 
"name": "BaseBdev2", 00:15:11.441 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 0, 00:15:11.441 "data_size": 65536 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 "name": "BaseBdev3", 00:15:11.441 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 0, 00:15:11.441 "data_size": 65536 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 "name": "BaseBdev4", 00:15:11.441 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 0, 00:15:11.441 "data_size": 65536 00:15:11.441 } 00:15:11.441 ] 00:15:11.441 }' 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.441 22:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.381 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.641 "name": "raid_bdev1", 00:15:12.641 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:12.641 "strip_size_kb": 64, 00:15:12.641 "state": "online", 00:15:12.641 "raid_level": "raid5f", 00:15:12.641 "superblock": false, 00:15:12.641 "num_base_bdevs": 4, 00:15:12.641 "num_base_bdevs_discovered": 4, 00:15:12.641 "num_base_bdevs_operational": 4, 00:15:12.641 "process": { 00:15:12.641 "type": "rebuild", 00:15:12.641 "target": "spare", 00:15:12.641 "progress": { 00:15:12.641 "blocks": 130560, 00:15:12.641 "percent": 66 00:15:12.641 } 00:15:12.641 }, 00:15:12.641 "base_bdevs_list": [ 00:15:12.641 { 00:15:12.641 "name": "spare", 00:15:12.641 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:12.641 "is_configured": true, 00:15:12.641 "data_offset": 0, 00:15:12.641 "data_size": 65536 00:15:12.641 }, 00:15:12.641 { 00:15:12.641 "name": "BaseBdev2", 00:15:12.641 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:12.641 "is_configured": true, 00:15:12.641 "data_offset": 0, 00:15:12.641 "data_size": 65536 00:15:12.641 }, 00:15:12.641 { 00:15:12.641 "name": "BaseBdev3", 00:15:12.641 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:12.641 "is_configured": true, 00:15:12.641 "data_offset": 0, 00:15:12.641 "data_size": 65536 00:15:12.641 }, 00:15:12.641 { 00:15:12.641 "name": "BaseBdev4", 00:15:12.641 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:12.641 "is_configured": true, 00:15:12.641 "data_offset": 0, 00:15:12.641 
"data_size": 65536 00:15:12.641 } 00:15:12.641 ] 00:15:12.641 }' 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.641 22:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.620 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.621 "name": "raid_bdev1", 00:15:13.621 "uuid": 
"fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:13.621 "strip_size_kb": 64, 00:15:13.621 "state": "online", 00:15:13.621 "raid_level": "raid5f", 00:15:13.621 "superblock": false, 00:15:13.621 "num_base_bdevs": 4, 00:15:13.621 "num_base_bdevs_discovered": 4, 00:15:13.621 "num_base_bdevs_operational": 4, 00:15:13.621 "process": { 00:15:13.621 "type": "rebuild", 00:15:13.621 "target": "spare", 00:15:13.621 "progress": { 00:15:13.621 "blocks": 153600, 00:15:13.621 "percent": 78 00:15:13.621 } 00:15:13.621 }, 00:15:13.621 "base_bdevs_list": [ 00:15:13.621 { 00:15:13.621 "name": "spare", 00:15:13.621 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:13.621 "is_configured": true, 00:15:13.621 "data_offset": 0, 00:15:13.621 "data_size": 65536 00:15:13.621 }, 00:15:13.621 { 00:15:13.621 "name": "BaseBdev2", 00:15:13.621 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:13.621 "is_configured": true, 00:15:13.621 "data_offset": 0, 00:15:13.621 "data_size": 65536 00:15:13.621 }, 00:15:13.621 { 00:15:13.621 "name": "BaseBdev3", 00:15:13.621 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:13.621 "is_configured": true, 00:15:13.621 "data_offset": 0, 00:15:13.621 "data_size": 65536 00:15:13.621 }, 00:15:13.621 { 00:15:13.621 "name": "BaseBdev4", 00:15:13.621 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:13.621 "is_configured": true, 00:15:13.621 "data_offset": 0, 00:15:13.621 "data_size": 65536 00:15:13.621 } 00:15:13.621 ] 00:15:13.621 }' 00:15:13.621 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.894 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.894 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.894 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.894 22:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.833 "name": "raid_bdev1", 00:15:14.833 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:14.833 "strip_size_kb": 64, 00:15:14.833 "state": "online", 00:15:14.833 "raid_level": "raid5f", 00:15:14.833 "superblock": false, 00:15:14.833 "num_base_bdevs": 4, 00:15:14.833 "num_base_bdevs_discovered": 4, 00:15:14.833 "num_base_bdevs_operational": 4, 00:15:14.833 "process": { 00:15:14.833 "type": "rebuild", 00:15:14.833 "target": "spare", 00:15:14.833 "progress": { 00:15:14.833 "blocks": 174720, 00:15:14.833 "percent": 88 00:15:14.833 } 00:15:14.833 }, 00:15:14.833 "base_bdevs_list": [ 00:15:14.833 { 00:15:14.833 "name": "spare", 00:15:14.833 "uuid": 
"887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:14.833 "is_configured": true, 00:15:14.833 "data_offset": 0, 00:15:14.833 "data_size": 65536 00:15:14.833 }, 00:15:14.833 { 00:15:14.833 "name": "BaseBdev2", 00:15:14.833 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:14.833 "is_configured": true, 00:15:14.833 "data_offset": 0, 00:15:14.833 "data_size": 65536 00:15:14.833 }, 00:15:14.833 { 00:15:14.833 "name": "BaseBdev3", 00:15:14.833 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:14.833 "is_configured": true, 00:15:14.833 "data_offset": 0, 00:15:14.833 "data_size": 65536 00:15:14.833 }, 00:15:14.833 { 00:15:14.833 "name": "BaseBdev4", 00:15:14.833 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:14.833 "is_configured": true, 00:15:14.833 "data_offset": 0, 00:15:14.833 "data_size": 65536 00:15:14.833 } 00:15:14.833 ] 00:15:14.833 }' 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.833 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.093 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.093 22:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.032 [2024-11-26 22:59:54.972089] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:16.032 [2024-11-26 22:59:54.972156] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:16.032 [2024-11-26 22:59:54.972197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.032 22:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.032 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.032 "name": "raid_bdev1", 00:15:16.032 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:16.032 "strip_size_kb": 64, 00:15:16.032 "state": "online", 00:15:16.032 "raid_level": "raid5f", 00:15:16.032 "superblock": false, 00:15:16.032 "num_base_bdevs": 4, 00:15:16.032 "num_base_bdevs_discovered": 4, 00:15:16.032 "num_base_bdevs_operational": 4, 00:15:16.032 "base_bdevs_list": [ 00:15:16.032 { 00:15:16.032 "name": "spare", 00:15:16.032 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:16.032 "is_configured": true, 00:15:16.032 "data_offset": 0, 00:15:16.032 "data_size": 65536 00:15:16.032 }, 00:15:16.032 { 00:15:16.032 "name": "BaseBdev2", 00:15:16.032 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:16.032 "is_configured": true, 00:15:16.032 "data_offset": 0, 00:15:16.032 "data_size": 65536 00:15:16.032 }, 00:15:16.032 { 00:15:16.032 "name": "BaseBdev3", 00:15:16.032 
"uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:16.032 "is_configured": true, 00:15:16.032 "data_offset": 0, 00:15:16.032 "data_size": 65536 00:15:16.032 }, 00:15:16.032 { 00:15:16.032 "name": "BaseBdev4", 00:15:16.032 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:16.032 "is_configured": true, 00:15:16.032 "data_offset": 0, 00:15:16.032 "data_size": 65536 00:15:16.032 } 00:15:16.032 ] 00:15:16.032 }' 00:15:16.032 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.033 22:59:55 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.293 "name": "raid_bdev1", 00:15:16.293 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:16.293 "strip_size_kb": 64, 00:15:16.293 "state": "online", 00:15:16.293 "raid_level": "raid5f", 00:15:16.293 "superblock": false, 00:15:16.293 "num_base_bdevs": 4, 00:15:16.293 "num_base_bdevs_discovered": 4, 00:15:16.293 "num_base_bdevs_operational": 4, 00:15:16.293 "base_bdevs_list": [ 00:15:16.293 { 00:15:16.293 "name": "spare", 00:15:16.293 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev2", 00:15:16.293 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev3", 00:15:16.293 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev4", 00:15:16.293 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 } 00:15:16.293 ] 00:15:16.293 }' 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.293 "name": "raid_bdev1", 00:15:16.293 "uuid": "fc0da33f-0523-452b-b73d-5218aed71c3d", 00:15:16.293 "strip_size_kb": 64, 00:15:16.293 "state": "online", 00:15:16.293 "raid_level": "raid5f", 00:15:16.293 "superblock": false, 00:15:16.293 "num_base_bdevs": 4, 00:15:16.293 "num_base_bdevs_discovered": 4, 00:15:16.293 
"num_base_bdevs_operational": 4, 00:15:16.293 "base_bdevs_list": [ 00:15:16.293 { 00:15:16.293 "name": "spare", 00:15:16.293 "uuid": "887268dd-c48d-5131-9e0f-1062501c5f4f", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev2", 00:15:16.293 "uuid": "8ef336ad-56cb-5878-a4e2-d5288fe98be7", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev3", 00:15:16.293 "uuid": "0fba179b-6ef2-5c05-a03f-57a3c2a88e30", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev4", 00:15:16.293 "uuid": "d48c67d0-188d-5e04-9972-906cfa83de7a", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 65536 00:15:16.293 } 00:15:16.293 ] 00:15:16.293 }' 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.293 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.552 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.552 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.552 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.552 [2024-11-26 22:59:55.673671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.552 [2024-11-26 22:59:55.673706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.552 [2024-11-26 22:59:55.673785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.552 [2024-11-26 22:59:55.673892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:15:16.552 [2024-11-26 22:59:55.673907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.812 
22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.812 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:16.812 /dev/nbd0 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.072 1+0 records in 00:15:17.072 1+0 records out 00:15:17.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535317 s, 7.7 MB/s 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.072 22:59:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.072 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:17.072 /dev/nbd1 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.332 1+0 records in 00:15:17.332 1+0 records out 00:15:17.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384027 s, 10.7 MB/s 
00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.332 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.590 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96655 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 96655 ']' 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- 
# kill -0 96655 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96655 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.849 killing process with pid 96655 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96655' 00:15:17.849 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 96655 00:15:17.849 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.849 00:15:17.849 Latency(us) 00:15:17.849 [2024-11-26T22:59:56.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.849 [2024-11-26T22:59:56.977Z] =================================================================================================================== 00:15:17.849 [2024-11-26T22:59:56.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.849 [2024-11-26 22:59:56.804782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.850 22:59:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 96655 00:15:17.850 [2024-11-26 22:59:56.854601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.109 22:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:18.109 00:15:18.109 real 0m17.411s 00:15:18.109 user 0m21.097s 00:15:18.109 sys 0m2.457s 00:15:18.109 22:59:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.110 ************************************ 00:15:18.110 END TEST raid5f_rebuild_test 00:15:18.110 ************************************ 00:15:18.110 22:59:57 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:18.110 22:59:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:18.110 22:59:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.110 22:59:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.110 ************************************ 00:15:18.110 START TEST raid5f_rebuild_test_sb 00:15:18.110 ************************************ 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97140 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97140 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97140 ']' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.110 22:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.369 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:18.369 Zero copy mechanism will not be used. 00:15:18.369 [2024-11-26 22:59:57.253052] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:15:18.369 [2024-11-26 22:59:57.253184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97140 ] 00:15:18.369 [2024-11-26 22:59:57.393526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:18.369 [2024-11-26 22:59:57.430485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.369 [2024-11-26 22:59:57.456816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.629 [2024-11-26 22:59:57.500803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.629 [2024-11-26 22:59:57.500845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 BaseBdev1_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.097343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.198 [2024-11-26 22:59:58.097413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.198 [2024-11-26 22:59:58.097444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:19.198 [2024-11-26 22:59:58.097464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.198 [2024-11-26 22:59:58.099477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.198 [2024-11-26 22:59:58.099534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.198 BaseBdev1 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 BaseBdev2_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.125824] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:19.198 [2024-11-26 22:59:58.125878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.198 [2024-11-26 22:59:58.125895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.198 [2024-11-26 22:59:58.125905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.198 [2024-11-26 22:59:58.127862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.198 [2024-11-26 22:59:58.127902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.198 BaseBdev2 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 BaseBdev3_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.154307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:19.198 [2024-11-26 22:59:58.154357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:19.198 [2024-11-26 22:59:58.154377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.198 [2024-11-26 22:59:58.154387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.198 [2024-11-26 22:59:58.156323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.198 [2024-11-26 22:59:58.156361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.198 BaseBdev3 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 BaseBdev4_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.198285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:19.198 [2024-11-26 22:59:58.198400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.198 [2024-11-26 22:59:58.198442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:19.198 [2024-11-26 
22:59:58.198465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.198 [2024-11-26 22:59:58.201931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.198 [2024-11-26 22:59:58.201986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:19.198 BaseBdev4 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 spare_malloc 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 spare_delay 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.239930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.198 [2024-11-26 22:59:58.239996] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.198 [2024-11-26 22:59:58.240012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:19.198 [2024-11-26 22:59:58.240033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.198 [2024-11-26 22:59:58.241963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.198 [2024-11-26 22:59:58.242001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.198 spare 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-26 22:59:58.252011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.198 [2024-11-26 22:59:58.253740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.198 [2024-11-26 22:59:58.253802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.198 [2024-11-26 22:59:58.253842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.198 [2024-11-26 22:59:58.253993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:19.198 [2024-11-26 22:59:58.254012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:19.198 [2024-11-26 22:59:58.254255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:19.198 [2024-11-26 22:59:58.254702] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:19.198 [2024-11-26 22:59:58.254721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:19.198 [2024-11-26 22:59:58.254823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.198 "name": "raid_bdev1", 00:15:19.198 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:19.198 "strip_size_kb": 64, 00:15:19.198 "state": "online", 00:15:19.198 "raid_level": "raid5f", 00:15:19.198 "superblock": true, 00:15:19.198 "num_base_bdevs": 4, 00:15:19.198 "num_base_bdevs_discovered": 4, 00:15:19.198 "num_base_bdevs_operational": 4, 00:15:19.198 "base_bdevs_list": [ 00:15:19.198 { 00:15:19.198 "name": "BaseBdev1", 00:15:19.198 "uuid": "eeb7e195-b0ce-5a6a-9d30-e27d3d0a7ed1", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 2048, 00:15:19.198 "data_size": 63488 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev2", 00:15:19.198 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 2048, 00:15:19.198 "data_size": 63488 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev3", 00:15:19.198 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 2048, 00:15:19.198 "data_size": 63488 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev4", 00:15:19.198 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 2048, 00:15:19.198 "data_size": 63488 00:15:19.198 } 00:15:19.198 ] 00:15:19.198 }' 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.198 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 [2024-11-26 22:59:58.704799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.767 22:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:20.027 [2024-11-26 22:59:58.960740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:20.027 /dev/nbd0 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.027 1+0 records in 00:15:20.027 1+0 records out 00:15:20.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508433 s, 8.1 MB/s 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:20.027 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:20.595 496+0 records in 00:15:20.595 496+0 records out 00:15:20.595 97517568 bytes (98 MB, 93 MiB) copied, 0.539726 s, 181 MB/s 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.595 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.854 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.854 [2024-11-26 22:59:59.789493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.854 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.854 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.854 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.855 [2024-11-26 22:59:59.811100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.855 22:59:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.855 "name": "raid_bdev1", 00:15:20.855 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:20.855 "strip_size_kb": 64, 00:15:20.855 "state": "online", 00:15:20.855 "raid_level": "raid5f", 00:15:20.855 "superblock": true, 
00:15:20.855 "num_base_bdevs": 4, 00:15:20.855 "num_base_bdevs_discovered": 3, 00:15:20.855 "num_base_bdevs_operational": 3, 00:15:20.855 "base_bdevs_list": [ 00:15:20.855 { 00:15:20.855 "name": null, 00:15:20.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.855 "is_configured": false, 00:15:20.855 "data_offset": 0, 00:15:20.855 "data_size": 63488 00:15:20.855 }, 00:15:20.855 { 00:15:20.855 "name": "BaseBdev2", 00:15:20.855 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:20.855 "is_configured": true, 00:15:20.855 "data_offset": 2048, 00:15:20.855 "data_size": 63488 00:15:20.855 }, 00:15:20.855 { 00:15:20.855 "name": "BaseBdev3", 00:15:20.855 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:20.855 "is_configured": true, 00:15:20.855 "data_offset": 2048, 00:15:20.855 "data_size": 63488 00:15:20.855 }, 00:15:20.855 { 00:15:20.855 "name": "BaseBdev4", 00:15:20.855 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:20.855 "is_configured": true, 00:15:20.855 "data_offset": 2048, 00:15:20.855 "data_size": 63488 00:15:20.855 } 00:15:20.855 ] 00:15:20.855 }' 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.855 22:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.422 23:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.423 23:00:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.423 23:00:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.423 [2024-11-26 23:00:00.271202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.423 [2024-11-26 23:00:00.275349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:15:21.423 23:00:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.423 
23:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.423 [2024-11-26 23:00:00.277468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.359 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.360 "name": "raid_bdev1", 00:15:22.360 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:22.360 "strip_size_kb": 64, 00:15:22.360 "state": "online", 00:15:22.360 "raid_level": "raid5f", 00:15:22.360 "superblock": true, 00:15:22.360 "num_base_bdevs": 4, 00:15:22.360 "num_base_bdevs_discovered": 4, 00:15:22.360 "num_base_bdevs_operational": 4, 00:15:22.360 "process": { 00:15:22.360 "type": "rebuild", 00:15:22.360 "target": "spare", 00:15:22.360 "progress": { 00:15:22.360 "blocks": 19200, 00:15:22.360 "percent": 10 00:15:22.360 
} 00:15:22.360 }, 00:15:22.360 "base_bdevs_list": [ 00:15:22.360 { 00:15:22.360 "name": "spare", 00:15:22.360 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev2", 00:15:22.360 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev3", 00:15:22.360 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev4", 00:15:22.360 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 } 00:15:22.360 ] 00:15:22.360 }' 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.360 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.360 [2024-11-26 23:00:01.428160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.360 [2024-11-26 23:00:01.484805] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:15:22.360 [2024-11-26 23:00:01.484872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.360 [2024-11-26 23:00:01.484889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.360 [2024-11-26 23:00:01.484901] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.619 "name": "raid_bdev1", 00:15:22.619 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:22.619 "strip_size_kb": 64, 00:15:22.619 "state": "online", 00:15:22.619 "raid_level": "raid5f", 00:15:22.619 "superblock": true, 00:15:22.619 "num_base_bdevs": 4, 00:15:22.619 "num_base_bdevs_discovered": 3, 00:15:22.619 "num_base_bdevs_operational": 3, 00:15:22.619 "base_bdevs_list": [ 00:15:22.619 { 00:15:22.619 "name": null, 00:15:22.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.619 "is_configured": false, 00:15:22.619 "data_offset": 0, 00:15:22.619 "data_size": 63488 00:15:22.619 }, 00:15:22.619 { 00:15:22.619 "name": "BaseBdev2", 00:15:22.619 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:22.619 "is_configured": true, 00:15:22.619 "data_offset": 2048, 00:15:22.619 "data_size": 63488 00:15:22.619 }, 00:15:22.619 { 00:15:22.619 "name": "BaseBdev3", 00:15:22.619 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:22.619 "is_configured": true, 00:15:22.619 "data_offset": 2048, 00:15:22.619 "data_size": 63488 00:15:22.619 }, 00:15:22.619 { 00:15:22.619 "name": "BaseBdev4", 00:15:22.619 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:22.619 "is_configured": true, 00:15:22.619 "data_offset": 2048, 00:15:22.619 "data_size": 63488 00:15:22.619 } 00:15:22.619 ] 00:15:22.619 }' 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.619 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.879 23:00:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.879 "name": "raid_bdev1", 00:15:22.879 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:22.879 "strip_size_kb": 64, 00:15:22.879 "state": "online", 00:15:22.879 "raid_level": "raid5f", 00:15:22.879 "superblock": true, 00:15:22.879 "num_base_bdevs": 4, 00:15:22.879 "num_base_bdevs_discovered": 3, 00:15:22.879 "num_base_bdevs_operational": 3, 00:15:22.879 "base_bdevs_list": [ 00:15:22.879 { 00:15:22.879 "name": null, 00:15:22.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.879 "is_configured": false, 00:15:22.879 "data_offset": 0, 00:15:22.879 "data_size": 63488 00:15:22.879 }, 00:15:22.879 { 00:15:22.879 "name": "BaseBdev2", 00:15:22.879 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:22.879 "is_configured": true, 00:15:22.879 "data_offset": 2048, 00:15:22.879 "data_size": 63488 00:15:22.879 }, 00:15:22.879 { 00:15:22.879 "name": "BaseBdev3", 00:15:22.879 "uuid": 
"06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:22.879 "is_configured": true, 00:15:22.879 "data_offset": 2048, 00:15:22.879 "data_size": 63488 00:15:22.879 }, 00:15:22.879 { 00:15:22.879 "name": "BaseBdev4", 00:15:22.879 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:22.879 "is_configured": true, 00:15:22.879 "data_offset": 2048, 00:15:22.879 "data_size": 63488 00:15:22.879 } 00:15:22.879 ] 00:15:22.879 }' 00:15:22.879 23:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.144 [2024-11-26 23:00:02.074544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.144 [2024-11-26 23:00:02.077812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:15:23.144 [2024-11-26 23:00:02.079930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.144 23:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.080 "name": "raid_bdev1", 00:15:24.080 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:24.080 "strip_size_kb": 64, 00:15:24.080 "state": "online", 00:15:24.080 "raid_level": "raid5f", 00:15:24.080 "superblock": true, 00:15:24.080 "num_base_bdevs": 4, 00:15:24.080 "num_base_bdevs_discovered": 4, 00:15:24.080 "num_base_bdevs_operational": 4, 00:15:24.080 "process": { 00:15:24.080 "type": "rebuild", 00:15:24.080 "target": "spare", 00:15:24.080 "progress": { 00:15:24.080 "blocks": 19200, 00:15:24.080 "percent": 10 00:15:24.080 } 00:15:24.080 }, 00:15:24.080 "base_bdevs_list": [ 00:15:24.080 { 00:15:24.080 "name": "spare", 00:15:24.080 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:24.080 "is_configured": true, 00:15:24.080 "data_offset": 2048, 00:15:24.080 "data_size": 63488 00:15:24.080 }, 00:15:24.080 { 00:15:24.080 "name": "BaseBdev2", 00:15:24.080 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:24.080 
"is_configured": true, 00:15:24.080 "data_offset": 2048, 00:15:24.080 "data_size": 63488 00:15:24.080 }, 00:15:24.080 { 00:15:24.080 "name": "BaseBdev3", 00:15:24.080 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:24.080 "is_configured": true, 00:15:24.080 "data_offset": 2048, 00:15:24.080 "data_size": 63488 00:15:24.080 }, 00:15:24.080 { 00:15:24.080 "name": "BaseBdev4", 00:15:24.080 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:24.080 "is_configured": true, 00:15:24.080 "data_offset": 2048, 00:15:24.080 "data_size": 63488 00:15:24.080 } 00:15:24.080 ] 00:15:24.080 }' 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.080 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:24.337 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.337 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.338 "name": "raid_bdev1", 00:15:24.338 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:24.338 "strip_size_kb": 64, 00:15:24.338 "state": "online", 00:15:24.338 "raid_level": "raid5f", 00:15:24.338 "superblock": true, 00:15:24.338 "num_base_bdevs": 4, 00:15:24.338 "num_base_bdevs_discovered": 4, 00:15:24.338 "num_base_bdevs_operational": 4, 00:15:24.338 "process": { 00:15:24.338 "type": "rebuild", 00:15:24.338 "target": "spare", 00:15:24.338 "progress": { 00:15:24.338 "blocks": 21120, 00:15:24.338 "percent": 11 00:15:24.338 } 00:15:24.338 }, 00:15:24.338 "base_bdevs_list": [ 00:15:24.338 { 00:15:24.338 "name": "spare", 00:15:24.338 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:24.338 "is_configured": true, 00:15:24.338 "data_offset": 2048, 00:15:24.338 "data_size": 63488 00:15:24.338 }, 00:15:24.338 { 00:15:24.338 "name": "BaseBdev2", 00:15:24.338 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:24.338 
"is_configured": true, 00:15:24.338 "data_offset": 2048, 00:15:24.338 "data_size": 63488 00:15:24.338 }, 00:15:24.338 { 00:15:24.338 "name": "BaseBdev3", 00:15:24.338 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:24.338 "is_configured": true, 00:15:24.338 "data_offset": 2048, 00:15:24.338 "data_size": 63488 00:15:24.338 }, 00:15:24.338 { 00:15:24.338 "name": "BaseBdev4", 00:15:24.338 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:24.338 "is_configured": true, 00:15:24.338 "data_offset": 2048, 00:15:24.338 "data_size": 63488 00:15:24.338 } 00:15:24.338 ] 00:15:24.338 }' 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.338 23:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.275 23:00:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.275 "name": "raid_bdev1", 00:15:25.275 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:25.275 "strip_size_kb": 64, 00:15:25.275 "state": "online", 00:15:25.275 "raid_level": "raid5f", 00:15:25.275 "superblock": true, 00:15:25.275 "num_base_bdevs": 4, 00:15:25.275 "num_base_bdevs_discovered": 4, 00:15:25.275 "num_base_bdevs_operational": 4, 00:15:25.275 "process": { 00:15:25.275 "type": "rebuild", 00:15:25.275 "target": "spare", 00:15:25.275 "progress": { 00:15:25.275 "blocks": 42240, 00:15:25.275 "percent": 22 00:15:25.275 } 00:15:25.275 }, 00:15:25.275 "base_bdevs_list": [ 00:15:25.275 { 00:15:25.275 "name": "spare", 00:15:25.275 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:25.275 "is_configured": true, 00:15:25.275 "data_offset": 2048, 00:15:25.275 "data_size": 63488 00:15:25.275 }, 00:15:25.275 { 00:15:25.275 "name": "BaseBdev2", 00:15:25.275 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:25.275 "is_configured": true, 00:15:25.275 "data_offset": 2048, 00:15:25.275 "data_size": 63488 00:15:25.275 }, 00:15:25.275 { 00:15:25.275 "name": "BaseBdev3", 00:15:25.275 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:25.275 "is_configured": true, 00:15:25.275 "data_offset": 2048, 00:15:25.275 "data_size": 63488 00:15:25.275 }, 00:15:25.275 { 00:15:25.275 "name": "BaseBdev4", 00:15:25.275 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:25.275 "is_configured": true, 00:15:25.275 "data_offset": 2048, 00:15:25.275 
"data_size": 63488 00:15:25.275 } 00:15:25.275 ] 00:15:25.275 }' 00:15:25.275 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.534 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.534 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.534 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.534 23:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.471 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.471 "name": 
"raid_bdev1", 00:15:26.471 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:26.471 "strip_size_kb": 64, 00:15:26.471 "state": "online", 00:15:26.471 "raid_level": "raid5f", 00:15:26.471 "superblock": true, 00:15:26.471 "num_base_bdevs": 4, 00:15:26.471 "num_base_bdevs_discovered": 4, 00:15:26.471 "num_base_bdevs_operational": 4, 00:15:26.471 "process": { 00:15:26.471 "type": "rebuild", 00:15:26.471 "target": "spare", 00:15:26.471 "progress": { 00:15:26.471 "blocks": 65280, 00:15:26.471 "percent": 34 00:15:26.471 } 00:15:26.471 }, 00:15:26.471 "base_bdevs_list": [ 00:15:26.471 { 00:15:26.471 "name": "spare", 00:15:26.471 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:26.471 "is_configured": true, 00:15:26.471 "data_offset": 2048, 00:15:26.471 "data_size": 63488 00:15:26.471 }, 00:15:26.471 { 00:15:26.471 "name": "BaseBdev2", 00:15:26.471 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:26.471 "is_configured": true, 00:15:26.471 "data_offset": 2048, 00:15:26.471 "data_size": 63488 00:15:26.471 }, 00:15:26.471 { 00:15:26.472 "name": "BaseBdev3", 00:15:26.472 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:26.472 "is_configured": true, 00:15:26.472 "data_offset": 2048, 00:15:26.472 "data_size": 63488 00:15:26.472 }, 00:15:26.472 { 00:15:26.472 "name": "BaseBdev4", 00:15:26.472 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:26.472 "is_configured": true, 00:15:26.472 "data_offset": 2048, 00:15:26.472 "data_size": 63488 00:15:26.472 } 00:15:26.472 ] 00:15:26.472 }' 00:15:26.472 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.730 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.730 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.730 23:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.730 23:00:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.665 "name": "raid_bdev1", 00:15:27.665 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:27.665 "strip_size_kb": 64, 00:15:27.665 "state": "online", 00:15:27.665 "raid_level": "raid5f", 00:15:27.665 "superblock": true, 00:15:27.665 "num_base_bdevs": 4, 00:15:27.665 "num_base_bdevs_discovered": 4, 00:15:27.665 "num_base_bdevs_operational": 4, 00:15:27.665 "process": { 00:15:27.665 "type": "rebuild", 00:15:27.665 "target": "spare", 00:15:27.665 "progress": { 00:15:27.665 "blocks": 86400, 00:15:27.665 "percent": 45 00:15:27.665 } 00:15:27.665 }, 00:15:27.665 
"base_bdevs_list": [ 00:15:27.665 { 00:15:27.665 "name": "spare", 00:15:27.665 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:27.665 "is_configured": true, 00:15:27.665 "data_offset": 2048, 00:15:27.665 "data_size": 63488 00:15:27.665 }, 00:15:27.665 { 00:15:27.665 "name": "BaseBdev2", 00:15:27.665 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:27.665 "is_configured": true, 00:15:27.665 "data_offset": 2048, 00:15:27.665 "data_size": 63488 00:15:27.665 }, 00:15:27.665 { 00:15:27.665 "name": "BaseBdev3", 00:15:27.665 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:27.665 "is_configured": true, 00:15:27.665 "data_offset": 2048, 00:15:27.665 "data_size": 63488 00:15:27.665 }, 00:15:27.665 { 00:15:27.665 "name": "BaseBdev4", 00:15:27.665 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:27.665 "is_configured": true, 00:15:27.665 "data_offset": 2048, 00:15:27.665 "data_size": 63488 00:15:27.665 } 00:15:27.665 ] 00:15:27.665 }' 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.665 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.922 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.922 23:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.856 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.856 "name": "raid_bdev1", 00:15:28.856 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:28.856 "strip_size_kb": 64, 00:15:28.856 "state": "online", 00:15:28.856 "raid_level": "raid5f", 00:15:28.856 "superblock": true, 00:15:28.856 "num_base_bdevs": 4, 00:15:28.856 "num_base_bdevs_discovered": 4, 00:15:28.856 "num_base_bdevs_operational": 4, 00:15:28.856 "process": { 00:15:28.857 "type": "rebuild", 00:15:28.857 "target": "spare", 00:15:28.857 "progress": { 00:15:28.857 "blocks": 109440, 00:15:28.857 "percent": 57 00:15:28.857 } 00:15:28.857 }, 00:15:28.857 "base_bdevs_list": [ 00:15:28.857 { 00:15:28.857 "name": "spare", 00:15:28.857 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:28.857 "is_configured": true, 00:15:28.857 "data_offset": 2048, 00:15:28.857 "data_size": 63488 00:15:28.857 }, 00:15:28.857 { 00:15:28.857 "name": "BaseBdev2", 00:15:28.857 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:28.857 "is_configured": true, 00:15:28.857 "data_offset": 2048, 00:15:28.857 "data_size": 63488 00:15:28.857 }, 00:15:28.857 { 00:15:28.857 "name": "BaseBdev3", 00:15:28.857 "uuid": 
"06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:28.857 "is_configured": true, 00:15:28.857 "data_offset": 2048, 00:15:28.857 "data_size": 63488 00:15:28.857 }, 00:15:28.857 { 00:15:28.857 "name": "BaseBdev4", 00:15:28.857 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:28.857 "is_configured": true, 00:15:28.857 "data_offset": 2048, 00:15:28.857 "data_size": 63488 00:15:28.857 } 00:15:28.857 ] 00:15:28.857 }' 00:15:28.857 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.857 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.857 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.857 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.857 23:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.233 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.233 "name": "raid_bdev1", 00:15:30.233 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:30.233 "strip_size_kb": 64, 00:15:30.233 "state": "online", 00:15:30.233 "raid_level": "raid5f", 00:15:30.233 "superblock": true, 00:15:30.233 "num_base_bdevs": 4, 00:15:30.233 "num_base_bdevs_discovered": 4, 00:15:30.233 "num_base_bdevs_operational": 4, 00:15:30.233 "process": { 00:15:30.233 "type": "rebuild", 00:15:30.233 "target": "spare", 00:15:30.233 "progress": { 00:15:30.233 "blocks": 130560, 00:15:30.233 "percent": 68 00:15:30.233 } 00:15:30.233 }, 00:15:30.233 "base_bdevs_list": [ 00:15:30.233 { 00:15:30.233 "name": "spare", 00:15:30.233 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:30.233 "is_configured": true, 00:15:30.234 "data_offset": 2048, 00:15:30.234 "data_size": 63488 00:15:30.234 }, 00:15:30.234 { 00:15:30.234 "name": "BaseBdev2", 00:15:30.234 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:30.234 "is_configured": true, 00:15:30.234 "data_offset": 2048, 00:15:30.234 "data_size": 63488 00:15:30.234 }, 00:15:30.234 { 00:15:30.234 "name": "BaseBdev3", 00:15:30.234 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:30.234 "is_configured": true, 00:15:30.234 "data_offset": 2048, 00:15:30.234 "data_size": 63488 00:15:30.234 }, 00:15:30.234 { 00:15:30.234 "name": "BaseBdev4", 00:15:30.234 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:30.234 "is_configured": true, 00:15:30.234 "data_offset": 2048, 00:15:30.234 "data_size": 63488 00:15:30.234 } 00:15:30.234 ] 00:15:30.234 }' 00:15:30.234 23:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.234 23:00:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.234 23:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.234 23:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.234 23:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.169 "name": "raid_bdev1", 00:15:31.169 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:31.169 "strip_size_kb": 64, 00:15:31.169 "state": "online", 00:15:31.169 "raid_level": "raid5f", 00:15:31.169 "superblock": true, 
00:15:31.169 "num_base_bdevs": 4, 00:15:31.169 "num_base_bdevs_discovered": 4, 00:15:31.169 "num_base_bdevs_operational": 4, 00:15:31.169 "process": { 00:15:31.169 "type": "rebuild", 00:15:31.169 "target": "spare", 00:15:31.169 "progress": { 00:15:31.169 "blocks": 151680, 00:15:31.169 "percent": 79 00:15:31.169 } 00:15:31.169 }, 00:15:31.169 "base_bdevs_list": [ 00:15:31.169 { 00:15:31.169 "name": "spare", 00:15:31.169 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:31.169 "is_configured": true, 00:15:31.169 "data_offset": 2048, 00:15:31.169 "data_size": 63488 00:15:31.169 }, 00:15:31.169 { 00:15:31.169 "name": "BaseBdev2", 00:15:31.169 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:31.169 "is_configured": true, 00:15:31.169 "data_offset": 2048, 00:15:31.169 "data_size": 63488 00:15:31.169 }, 00:15:31.169 { 00:15:31.169 "name": "BaseBdev3", 00:15:31.169 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:31.169 "is_configured": true, 00:15:31.169 "data_offset": 2048, 00:15:31.169 "data_size": 63488 00:15:31.169 }, 00:15:31.169 { 00:15:31.169 "name": "BaseBdev4", 00:15:31.169 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:31.169 "is_configured": true, 00:15:31.169 "data_offset": 2048, 00:15:31.169 "data_size": 63488 00:15:31.169 } 00:15:31.169 ] 00:15:31.169 }' 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.169 23:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.106 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.106 23:00:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.106 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.106 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.106 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.106 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.363 "name": "raid_bdev1", 00:15:32.363 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:32.363 "strip_size_kb": 64, 00:15:32.363 "state": "online", 00:15:32.363 "raid_level": "raid5f", 00:15:32.363 "superblock": true, 00:15:32.363 "num_base_bdevs": 4, 00:15:32.363 "num_base_bdevs_discovered": 4, 00:15:32.363 "num_base_bdevs_operational": 4, 00:15:32.363 "process": { 00:15:32.363 "type": "rebuild", 00:15:32.363 "target": "spare", 00:15:32.363 "progress": { 00:15:32.363 "blocks": 174720, 00:15:32.363 "percent": 91 00:15:32.363 } 00:15:32.363 }, 00:15:32.363 "base_bdevs_list": [ 00:15:32.363 { 00:15:32.363 "name": "spare", 00:15:32.363 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:32.363 "is_configured": true, 00:15:32.363 "data_offset": 2048, 00:15:32.363 
"data_size": 63488 00:15:32.363 }, 00:15:32.363 { 00:15:32.363 "name": "BaseBdev2", 00:15:32.363 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:32.363 "is_configured": true, 00:15:32.363 "data_offset": 2048, 00:15:32.363 "data_size": 63488 00:15:32.363 }, 00:15:32.363 { 00:15:32.363 "name": "BaseBdev3", 00:15:32.363 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:32.363 "is_configured": true, 00:15:32.363 "data_offset": 2048, 00:15:32.363 "data_size": 63488 00:15:32.363 }, 00:15:32.363 { 00:15:32.363 "name": "BaseBdev4", 00:15:32.363 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:32.363 "is_configured": true, 00:15:32.363 "data_offset": 2048, 00:15:32.363 "data_size": 63488 00:15:32.363 } 00:15:32.363 ] 00:15:32.363 }' 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.363 23:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.301 [2024-11-26 23:00:12.139524] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:33.301 [2024-11-26 23:00:12.139599] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:33.301 [2024-11-26 23:00:12.139772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.301 "name": "raid_bdev1", 00:15:33.301 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:33.301 "strip_size_kb": 64, 00:15:33.301 "state": "online", 00:15:33.301 "raid_level": "raid5f", 00:15:33.301 "superblock": true, 00:15:33.301 "num_base_bdevs": 4, 00:15:33.301 "num_base_bdevs_discovered": 4, 00:15:33.301 "num_base_bdevs_operational": 4, 00:15:33.301 "base_bdevs_list": [ 00:15:33.301 { 00:15:33.301 "name": "spare", 00:15:33.301 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:33.301 "is_configured": true, 00:15:33.301 "data_offset": 2048, 00:15:33.301 "data_size": 63488 00:15:33.301 }, 00:15:33.301 { 00:15:33.301 "name": "BaseBdev2", 00:15:33.301 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:33.301 "is_configured": true, 00:15:33.301 "data_offset": 2048, 00:15:33.301 "data_size": 63488 00:15:33.301 }, 00:15:33.301 { 00:15:33.301 "name": "BaseBdev3", 00:15:33.301 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 
00:15:33.301 "is_configured": true, 00:15:33.301 "data_offset": 2048, 00:15:33.301 "data_size": 63488 00:15:33.301 }, 00:15:33.301 { 00:15:33.301 "name": "BaseBdev4", 00:15:33.301 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:33.301 "is_configured": true, 00:15:33.301 "data_offset": 2048, 00:15:33.301 "data_size": 63488 00:15:33.301 } 00:15:33.301 ] 00:15:33.301 }' 00:15:33.301 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.559 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.560 23:00:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.560 "name": "raid_bdev1", 00:15:33.560 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:33.560 "strip_size_kb": 64, 00:15:33.560 "state": "online", 00:15:33.560 "raid_level": "raid5f", 00:15:33.560 "superblock": true, 00:15:33.560 "num_base_bdevs": 4, 00:15:33.560 "num_base_bdevs_discovered": 4, 00:15:33.560 "num_base_bdevs_operational": 4, 00:15:33.560 "base_bdevs_list": [ 00:15:33.560 { 00:15:33.560 "name": "spare", 00:15:33.560 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:33.560 "is_configured": true, 00:15:33.560 "data_offset": 2048, 00:15:33.560 "data_size": 63488 00:15:33.560 }, 00:15:33.560 { 00:15:33.560 "name": "BaseBdev2", 00:15:33.560 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:33.560 "is_configured": true, 00:15:33.560 "data_offset": 2048, 00:15:33.560 "data_size": 63488 00:15:33.560 }, 00:15:33.560 { 00:15:33.560 "name": "BaseBdev3", 00:15:33.560 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:33.560 "is_configured": true, 00:15:33.560 "data_offset": 2048, 00:15:33.560 "data_size": 63488 00:15:33.560 }, 00:15:33.560 { 00:15:33.560 "name": "BaseBdev4", 00:15:33.560 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:33.560 "is_configured": true, 00:15:33.560 "data_offset": 2048, 00:15:33.560 "data_size": 63488 00:15:33.560 } 00:15:33.560 ] 00:15:33.560 }' 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.560 23:00:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.560 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.819 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.819 "name": "raid_bdev1", 00:15:33.819 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:33.819 "strip_size_kb": 64, 00:15:33.819 "state": "online", 00:15:33.819 "raid_level": "raid5f", 00:15:33.819 "superblock": true, 
00:15:33.819 "num_base_bdevs": 4, 00:15:33.819 "num_base_bdevs_discovered": 4, 00:15:33.819 "num_base_bdevs_operational": 4, 00:15:33.819 "base_bdevs_list": [ 00:15:33.819 { 00:15:33.819 "name": "spare", 00:15:33.819 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:33.819 "is_configured": true, 00:15:33.819 "data_offset": 2048, 00:15:33.819 "data_size": 63488 00:15:33.819 }, 00:15:33.819 { 00:15:33.819 "name": "BaseBdev2", 00:15:33.819 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:33.819 "is_configured": true, 00:15:33.819 "data_offset": 2048, 00:15:33.819 "data_size": 63488 00:15:33.819 }, 00:15:33.819 { 00:15:33.819 "name": "BaseBdev3", 00:15:33.819 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:33.819 "is_configured": true, 00:15:33.819 "data_offset": 2048, 00:15:33.819 "data_size": 63488 00:15:33.819 }, 00:15:33.819 { 00:15:33.819 "name": "BaseBdev4", 00:15:33.819 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:33.819 "is_configured": true, 00:15:33.819 "data_offset": 2048, 00:15:33.819 "data_size": 63488 00:15:33.819 } 00:15:33.819 ] 00:15:33.819 }' 00:15:33.819 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.819 23:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.079 [2024-11-26 23:00:13.105842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.079 [2024-11-26 23:00:13.105881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.079 [2024-11-26 23:00:13.105981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.079 
[2024-11-26 23:00:13.106086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.079 [2024-11-26 23:00:13.106115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.079 23:00:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.079 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:34.339 /dev/nbd0 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.339 1+0 records in 00:15:34.339 1+0 records out 00:15:34.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324416 s, 12.6 MB/s 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.339 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:34.599 /dev/nbd1 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.599 23:00:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.599 1+0 records in 00:15:34.599 1+0 records out 00:15:34.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307812 s, 13.3 MB/s 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:34.599 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.600 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.600 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.600 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:34.600 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.600 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.860 23:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.120 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.120 [2024-11-26 23:00:14.138310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:35.120 [2024-11-26 23:00:14.138371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.120 [2024-11-26 23:00:14.138413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:35.120 [2024-11-26 23:00:14.138423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.120 [2024-11-26 23:00:14.140894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.121 [2024-11-26 23:00:14.140934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:35.121 [2024-11-26 23:00:14.141027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:35.121 [2024-11-26 23:00:14.141069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.121 [2024-11-26 23:00:14.141201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:35.121 [2024-11-26 23:00:14.141330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.121 [2024-11-26 23:00:14.141400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:35.121 spare 00:15:35.121 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.121 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:35.121 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.121 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.121 [2024-11-26 23:00:14.241491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:35.121 [2024-11-26 23:00:14.241529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.121 [2024-11-26 23:00:14.241844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:15:35.121 [2024-11-26 23:00:14.242372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:35.121 [2024-11-26 23:00:14.242398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:35.121 [2024-11-26 23:00:14.242556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.381 23:00:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.381 "name": "raid_bdev1", 00:15:35.381 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:35.381 "strip_size_kb": 64, 00:15:35.381 "state": "online", 00:15:35.381 "raid_level": "raid5f", 00:15:35.381 "superblock": true, 00:15:35.381 "num_base_bdevs": 4, 00:15:35.381 "num_base_bdevs_discovered": 4, 00:15:35.381 "num_base_bdevs_operational": 4, 00:15:35.381 "base_bdevs_list": [ 00:15:35.381 { 00:15:35.381 "name": "spare", 00:15:35.381 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:35.381 "is_configured": true, 00:15:35.381 "data_offset": 2048, 00:15:35.381 "data_size": 63488 
00:15:35.381 }, 00:15:35.381 { 00:15:35.381 "name": "BaseBdev2", 00:15:35.381 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:35.381 "is_configured": true, 00:15:35.381 "data_offset": 2048, 00:15:35.381 "data_size": 63488 00:15:35.381 }, 00:15:35.381 { 00:15:35.381 "name": "BaseBdev3", 00:15:35.381 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:35.381 "is_configured": true, 00:15:35.381 "data_offset": 2048, 00:15:35.381 "data_size": 63488 00:15:35.381 }, 00:15:35.381 { 00:15:35.381 "name": "BaseBdev4", 00:15:35.381 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:35.381 "is_configured": true, 00:15:35.381 "data_offset": 2048, 00:15:35.381 "data_size": 63488 00:15:35.381 } 00:15:35.381 ] 00:15:35.381 }' 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.381 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.644 23:00:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.644 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.644 "name": "raid_bdev1", 00:15:35.644 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:35.644 "strip_size_kb": 64, 00:15:35.644 "state": "online", 00:15:35.644 "raid_level": "raid5f", 00:15:35.644 "superblock": true, 00:15:35.644 "num_base_bdevs": 4, 00:15:35.644 "num_base_bdevs_discovered": 4, 00:15:35.644 "num_base_bdevs_operational": 4, 00:15:35.644 "base_bdevs_list": [ 00:15:35.644 { 00:15:35.644 "name": "spare", 00:15:35.644 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:35.644 "is_configured": true, 00:15:35.645 "data_offset": 2048, 00:15:35.645 "data_size": 63488 00:15:35.645 }, 00:15:35.645 { 00:15:35.645 "name": "BaseBdev2", 00:15:35.645 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:35.645 "is_configured": true, 00:15:35.645 "data_offset": 2048, 00:15:35.645 "data_size": 63488 00:15:35.645 }, 00:15:35.645 { 00:15:35.645 "name": "BaseBdev3", 00:15:35.645 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:35.645 "is_configured": true, 00:15:35.645 "data_offset": 2048, 00:15:35.645 "data_size": 63488 00:15:35.645 }, 00:15:35.645 { 00:15:35.645 "name": "BaseBdev4", 00:15:35.645 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:35.645 "is_configured": true, 00:15:35.645 "data_offset": 2048, 00:15:35.645 "data_size": 63488 00:15:35.645 } 00:15:35.645 ] 00:15:35.645 }' 00:15:35.645 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.915 23:00:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.915 [2024-11-26 23:00:14.894735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.915 "name": "raid_bdev1", 00:15:35.915 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:35.915 "strip_size_kb": 64, 00:15:35.915 "state": "online", 00:15:35.915 "raid_level": "raid5f", 00:15:35.915 "superblock": true, 00:15:35.915 "num_base_bdevs": 4, 00:15:35.915 "num_base_bdevs_discovered": 3, 00:15:35.915 "num_base_bdevs_operational": 3, 00:15:35.915 "base_bdevs_list": [ 00:15:35.915 { 00:15:35.915 "name": null, 00:15:35.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.915 "is_configured": false, 00:15:35.915 "data_offset": 0, 00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev2", 00:15:35.915 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev3", 00:15:35.915 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 
00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev4", 00:15:35.915 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 } 00:15:35.915 ] 00:15:35.915 }' 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.915 23:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.264 23:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.264 23:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.264 23:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.264 [2024-11-26 23:00:15.346892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.264 [2024-11-26 23:00:15.347137] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.264 [2024-11-26 23:00:15.347166] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:36.264 [2024-11-26 23:00:15.347211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.264 [2024-11-26 23:00:15.354233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:15:36.264 23:00:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.264 23:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:36.264 [2024-11-26 23:00:15.356789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.644 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.644 "name": "raid_bdev1", 00:15:37.644 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:37.644 "strip_size_kb": 64, 00:15:37.644 "state": "online", 00:15:37.644 
"raid_level": "raid5f", 00:15:37.644 "superblock": true, 00:15:37.644 "num_base_bdevs": 4, 00:15:37.644 "num_base_bdevs_discovered": 4, 00:15:37.644 "num_base_bdevs_operational": 4, 00:15:37.644 "process": { 00:15:37.644 "type": "rebuild", 00:15:37.644 "target": "spare", 00:15:37.644 "progress": { 00:15:37.644 "blocks": 19200, 00:15:37.644 "percent": 10 00:15:37.644 } 00:15:37.645 }, 00:15:37.645 "base_bdevs_list": [ 00:15:37.645 { 00:15:37.645 "name": "spare", 00:15:37.645 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev2", 00:15:37.645 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev3", 00:15:37.645 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev4", 00:15:37.645 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 } 00:15:37.645 ] 00:15:37.645 }' 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.645 [2024-11-26 23:00:16.494855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.645 [2024-11-26 23:00:16.565325] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.645 [2024-11-26 23:00:16.565408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.645 [2024-11-26 23:00:16.565427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.645 [2024-11-26 23:00:16.565437] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.645 "name": "raid_bdev1", 00:15:37.645 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:37.645 "strip_size_kb": 64, 00:15:37.645 "state": "online", 00:15:37.645 "raid_level": "raid5f", 00:15:37.645 "superblock": true, 00:15:37.645 "num_base_bdevs": 4, 00:15:37.645 "num_base_bdevs_discovered": 3, 00:15:37.645 "num_base_bdevs_operational": 3, 00:15:37.645 "base_bdevs_list": [ 00:15:37.645 { 00:15:37.645 "name": null, 00:15:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.645 "is_configured": false, 00:15:37.645 "data_offset": 0, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev2", 00:15:37.645 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev3", 00:15:37.645 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 }, 00:15:37.645 { 00:15:37.645 "name": "BaseBdev4", 00:15:37.645 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:37.645 "is_configured": true, 00:15:37.645 "data_offset": 2048, 00:15:37.645 "data_size": 63488 00:15:37.645 } 00:15:37.645 ] 00:15:37.645 
}' 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.645 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.905 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.905 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.905 23:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.905 [2024-11-26 23:00:16.998709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.905 [2024-11-26 23:00:16.998793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.905 [2024-11-26 23:00:16.998821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:37.905 [2024-11-26 23:00:16.998833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.905 [2024-11-26 23:00:16.999337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.905 [2024-11-26 23:00:16.999367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.905 [2024-11-26 23:00:16.999455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:37.905 [2024-11-26 23:00:16.999475] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:37.905 [2024-11-26 23:00:16.999485] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:37.905 [2024-11-26 23:00:16.999513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.905 [2024-11-26 23:00:17.005752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:15:37.905 spare 00:15:37.905 23:00:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.905 23:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:37.905 [2024-11-26 23:00:17.008281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.315 "name": "raid_bdev1", 00:15:39.315 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:39.315 "strip_size_kb": 64, 00:15:39.315 "state": 
"online", 00:15:39.315 "raid_level": "raid5f", 00:15:39.315 "superblock": true, 00:15:39.315 "num_base_bdevs": 4, 00:15:39.315 "num_base_bdevs_discovered": 4, 00:15:39.315 "num_base_bdevs_operational": 4, 00:15:39.315 "process": { 00:15:39.315 "type": "rebuild", 00:15:39.315 "target": "spare", 00:15:39.315 "progress": { 00:15:39.315 "blocks": 19200, 00:15:39.315 "percent": 10 00:15:39.315 } 00:15:39.315 }, 00:15:39.315 "base_bdevs_list": [ 00:15:39.315 { 00:15:39.315 "name": "spare", 00:15:39.315 "uuid": "a1227639-9663-5251-8826-34bd87d32bcd", 00:15:39.315 "is_configured": true, 00:15:39.315 "data_offset": 2048, 00:15:39.315 "data_size": 63488 00:15:39.315 }, 00:15:39.315 { 00:15:39.315 "name": "BaseBdev2", 00:15:39.315 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:39.315 "is_configured": true, 00:15:39.315 "data_offset": 2048, 00:15:39.315 "data_size": 63488 00:15:39.315 }, 00:15:39.315 { 00:15:39.315 "name": "BaseBdev3", 00:15:39.315 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:39.315 "is_configured": true, 00:15:39.315 "data_offset": 2048, 00:15:39.315 "data_size": 63488 00:15:39.315 }, 00:15:39.315 { 00:15:39.315 "name": "BaseBdev4", 00:15:39.315 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:39.315 "is_configured": true, 00:15:39.315 "data_offset": 2048, 00:15:39.315 "data_size": 63488 00:15:39.315 } 00:15:39.315 ] 00:15:39.315 }' 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.315 23:00:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.315 [2024-11-26 23:00:18.166163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.315 [2024-11-26 23:00:18.216762] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.315 [2024-11-26 23:00:18.216820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.315 [2024-11-26 23:00:18.216855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.315 [2024-11-26 23:00:18.216863] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.315 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.316 23:00:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.316 "name": "raid_bdev1", 00:15:39.316 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:39.316 "strip_size_kb": 64, 00:15:39.316 "state": "online", 00:15:39.316 "raid_level": "raid5f", 00:15:39.316 "superblock": true, 00:15:39.316 "num_base_bdevs": 4, 00:15:39.316 "num_base_bdevs_discovered": 3, 00:15:39.316 "num_base_bdevs_operational": 3, 00:15:39.316 "base_bdevs_list": [ 00:15:39.316 { 00:15:39.316 "name": null, 00:15:39.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.316 "is_configured": false, 00:15:39.316 "data_offset": 0, 00:15:39.316 "data_size": 63488 00:15:39.316 }, 00:15:39.316 { 00:15:39.316 "name": "BaseBdev2", 00:15:39.316 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:39.316 "is_configured": true, 00:15:39.316 "data_offset": 2048, 00:15:39.316 "data_size": 63488 00:15:39.316 }, 00:15:39.316 { 00:15:39.316 "name": "BaseBdev3", 00:15:39.316 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:39.316 "is_configured": true, 00:15:39.316 "data_offset": 2048, 00:15:39.316 "data_size": 63488 00:15:39.316 }, 00:15:39.316 { 00:15:39.316 "name": "BaseBdev4", 00:15:39.316 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:39.316 "is_configured": true, 00:15:39.316 "data_offset": 2048, 00:15:39.316 
"data_size": 63488 00:15:39.316 } 00:15:39.316 ] 00:15:39.316 }' 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.316 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.575 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.576 "name": "raid_bdev1", 00:15:39.576 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:39.576 "strip_size_kb": 64, 00:15:39.576 "state": "online", 00:15:39.576 "raid_level": "raid5f", 00:15:39.576 "superblock": true, 00:15:39.576 "num_base_bdevs": 4, 00:15:39.576 "num_base_bdevs_discovered": 3, 00:15:39.576 "num_base_bdevs_operational": 3, 00:15:39.576 "base_bdevs_list": [ 00:15:39.576 { 00:15:39.576 "name": null, 00:15:39.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.576 
"is_configured": false, 00:15:39.576 "data_offset": 0, 00:15:39.576 "data_size": 63488 00:15:39.576 }, 00:15:39.576 { 00:15:39.576 "name": "BaseBdev2", 00:15:39.576 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:39.576 "is_configured": true, 00:15:39.576 "data_offset": 2048, 00:15:39.576 "data_size": 63488 00:15:39.576 }, 00:15:39.576 { 00:15:39.576 "name": "BaseBdev3", 00:15:39.576 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:39.576 "is_configured": true, 00:15:39.576 "data_offset": 2048, 00:15:39.576 "data_size": 63488 00:15:39.576 }, 00:15:39.576 { 00:15:39.576 "name": "BaseBdev4", 00:15:39.576 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:39.576 "is_configured": true, 00:15:39.576 "data_offset": 2048, 00:15:39.576 "data_size": 63488 00:15:39.576 } 00:15:39.576 ] 00:15:39.576 }' 00:15:39.576 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.837 23:00:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.837 [2024-11-26 23:00:18.777934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.837 [2024-11-26 23:00:18.777995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.837 [2024-11-26 23:00:18.778036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:39.837 [2024-11-26 23:00:18.778046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.837 [2024-11-26 23:00:18.778541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.837 [2024-11-26 23:00:18.778572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.837 [2024-11-26 23:00:18.778666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:39.837 [2024-11-26 23:00:18.778681] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.837 [2024-11-26 23:00:18.778695] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.837 [2024-11-26 23:00:18.778707] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:39.837 BaseBdev1 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.837 23:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.782 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.783 "name": "raid_bdev1", 00:15:40.783 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:40.783 "strip_size_kb": 64, 00:15:40.783 "state": "online", 00:15:40.783 "raid_level": "raid5f", 00:15:40.783 "superblock": true, 00:15:40.783 "num_base_bdevs": 4, 00:15:40.783 "num_base_bdevs_discovered": 3, 00:15:40.783 "num_base_bdevs_operational": 3, 00:15:40.783 "base_bdevs_list": [ 00:15:40.783 { 00:15:40.783 "name": null, 00:15:40.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.783 "is_configured": false, 00:15:40.783 
"data_offset": 0, 00:15:40.783 "data_size": 63488 00:15:40.783 }, 00:15:40.783 { 00:15:40.783 "name": "BaseBdev2", 00:15:40.783 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:40.783 "is_configured": true, 00:15:40.783 "data_offset": 2048, 00:15:40.783 "data_size": 63488 00:15:40.783 }, 00:15:40.783 { 00:15:40.783 "name": "BaseBdev3", 00:15:40.783 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:40.783 "is_configured": true, 00:15:40.783 "data_offset": 2048, 00:15:40.783 "data_size": 63488 00:15:40.783 }, 00:15:40.783 { 00:15:40.783 "name": "BaseBdev4", 00:15:40.783 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:40.783 "is_configured": true, 00:15:40.783 "data_offset": 2048, 00:15:40.783 "data_size": 63488 00:15:40.783 } 00:15:40.783 ] 00:15:40.783 }' 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.783 23:00:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.352 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.352 "name": "raid_bdev1", 00:15:41.352 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:41.352 "strip_size_kb": 64, 00:15:41.352 "state": "online", 00:15:41.352 "raid_level": "raid5f", 00:15:41.352 "superblock": true, 00:15:41.352 "num_base_bdevs": 4, 00:15:41.352 "num_base_bdevs_discovered": 3, 00:15:41.352 "num_base_bdevs_operational": 3, 00:15:41.352 "base_bdevs_list": [ 00:15:41.352 { 00:15:41.352 "name": null, 00:15:41.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.352 "is_configured": false, 00:15:41.352 "data_offset": 0, 00:15:41.352 "data_size": 63488 00:15:41.352 }, 00:15:41.352 { 00:15:41.352 "name": "BaseBdev2", 00:15:41.352 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:41.352 "is_configured": true, 00:15:41.352 "data_offset": 2048, 00:15:41.352 "data_size": 63488 00:15:41.352 }, 00:15:41.352 { 00:15:41.352 "name": "BaseBdev3", 00:15:41.352 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:41.352 "is_configured": true, 00:15:41.352 "data_offset": 2048, 00:15:41.352 "data_size": 63488 00:15:41.352 }, 00:15:41.352 { 00:15:41.352 "name": "BaseBdev4", 00:15:41.352 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:41.352 "is_configured": true, 00:15:41.352 "data_offset": 2048, 00:15:41.352 "data_size": 63488 00:15:41.352 } 00:15:41.352 ] 00:15:41.352 }' 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.353 
23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.353 [2024-11-26 23:00:20.366391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.353 [2024-11-26 23:00:20.366582] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.353 [2024-11-26 23:00:20.366601] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:41.353 request: 00:15:41.353 { 00:15:41.353 "base_bdev": "BaseBdev1", 00:15:41.353 "raid_bdev": "raid_bdev1", 00:15:41.353 "method": "bdev_raid_add_base_bdev", 00:15:41.353 "req_id": 1 00:15:41.353 } 00:15:41.353 Got JSON-RPC error response 00:15:41.353 response: 00:15:41.353 { 00:15:41.353 "code": -22, 00:15:41.353 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:41.353 } 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.353 23:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.553 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.553 "name": "raid_bdev1", 00:15:42.553 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:42.553 "strip_size_kb": 64, 00:15:42.553 "state": "online", 00:15:42.553 "raid_level": "raid5f", 00:15:42.553 "superblock": true, 00:15:42.553 "num_base_bdevs": 4, 00:15:42.553 "num_base_bdevs_discovered": 3, 00:15:42.553 "num_base_bdevs_operational": 3, 00:15:42.553 "base_bdevs_list": [ 00:15:42.553 { 00:15:42.553 "name": null, 00:15:42.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.553 "is_configured": false, 00:15:42.553 "data_offset": 0, 00:15:42.553 "data_size": 63488 00:15:42.553 }, 00:15:42.553 { 00:15:42.553 "name": "BaseBdev2", 00:15:42.553 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:42.553 "is_configured": true, 00:15:42.553 "data_offset": 2048, 00:15:42.553 "data_size": 63488 00:15:42.553 }, 00:15:42.553 { 00:15:42.553 "name": "BaseBdev3", 00:15:42.553 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:42.553 "is_configured": true, 00:15:42.553 "data_offset": 2048, 00:15:42.553 "data_size": 63488 00:15:42.553 }, 00:15:42.553 { 00:15:42.553 "name": "BaseBdev4", 00:15:42.553 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:42.553 "is_configured": true, 00:15:42.553 "data_offset": 2048, 00:15:42.553 "data_size": 63488 00:15:42.553 } 00:15:42.553 ] 00:15:42.553 }' 00:15:42.553 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.553 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.812 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.812 "name": "raid_bdev1", 00:15:42.813 "uuid": "e737fd4d-6c70-48d5-a0eb-4256a0019f4b", 00:15:42.813 "strip_size_kb": 64, 00:15:42.813 "state": "online", 00:15:42.813 "raid_level": "raid5f", 00:15:42.813 "superblock": true, 00:15:42.813 "num_base_bdevs": 4, 00:15:42.813 "num_base_bdevs_discovered": 3, 00:15:42.813 "num_base_bdevs_operational": 3, 00:15:42.813 "base_bdevs_list": [ 00:15:42.813 { 00:15:42.813 "name": null, 00:15:42.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.813 "is_configured": false, 00:15:42.813 "data_offset": 0, 00:15:42.813 "data_size": 63488 00:15:42.813 }, 00:15:42.813 { 00:15:42.813 "name": "BaseBdev2", 00:15:42.813 "uuid": "b4be5972-9d53-51e1-b41a-716252274edb", 00:15:42.813 "is_configured": true, 
00:15:42.813 "data_offset": 2048, 00:15:42.813 "data_size": 63488 00:15:42.813 }, 00:15:42.813 { 00:15:42.813 "name": "BaseBdev3", 00:15:42.813 "uuid": "06712b97-06d1-5ece-b6f6-10ec65dfeb0b", 00:15:42.813 "is_configured": true, 00:15:42.813 "data_offset": 2048, 00:15:42.813 "data_size": 63488 00:15:42.813 }, 00:15:42.813 { 00:15:42.813 "name": "BaseBdev4", 00:15:42.813 "uuid": "9083f025-af33-5df2-9c86-6d15c5ebb85c", 00:15:42.813 "is_configured": true, 00:15:42.813 "data_offset": 2048, 00:15:42.813 "data_size": 63488 00:15:42.813 } 00:15:42.813 ] 00:15:42.813 }' 00:15:42.813 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.074 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.074 23:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97140 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97140 ']' 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97140 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97140 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.074 killing process with pid 97140 00:15:43.074 23:00:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97140' 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97140 00:15:43.074 Received shutdown signal, test time was about 60.000000 seconds 00:15:43.074 00:15:43.074 Latency(us) 00:15:43.074 [2024-11-26T23:00:22.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.074 [2024-11-26T23:00:22.202Z] =================================================================================================================== 00:15:43.074 [2024-11-26T23:00:22.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:43.074 [2024-11-26 23:00:22.056828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.074 [2024-11-26 23:00:22.056961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.074 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97140 00:15:43.074 [2024-11-26 23:00:22.057037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.074 [2024-11-26 23:00:22.057049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:43.074 [2024-11-26 23:00:22.107778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.335 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:43.335 00:15:43.335 real 0m25.175s 00:15:43.335 user 0m31.881s 00:15:43.335 sys 0m3.130s 00:15:43.335 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.335 23:00:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.335 ************************************ 00:15:43.335 END TEST raid5f_rebuild_test_sb 00:15:43.335 ************************************ 00:15:43.335 23:00:22 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:43.335 23:00:22 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:43.335 23:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:43.335 23:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.335 23:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.335 ************************************ 00:15:43.335 START TEST raid_state_function_test_sb_4k 00:15:43.335 ************************************ 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.335 23:00:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=97938 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:43.335 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97938' 00:15:43.335 Process raid pid: 97938 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 97938 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 97938 ']' 00:15:43.336 23:00:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.336 23:00:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.595 [2024-11-26 23:00:22.505863] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:43.595 [2024-11-26 23:00:22.505975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.595 [2024-11-26 23:00:22.641431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:43.595 [2024-11-26 23:00:22.681359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.595 [2024-11-26 23:00:22.707862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.855 [2024-11-26 23:00:22.751317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.855 [2024-11-26 23:00:22.751350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.425 [2024-11-26 23:00:23.331473] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.425 [2024-11-26 23:00:23.331525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.425 [2024-11-26 23:00:23.331537] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.425 [2024-11-26 23:00:23.331545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.425 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.426 
23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.426 "name": "Existed_Raid", 00:15:44.426 "uuid": "ad1974f6-7bc0-4eb8-a661-ba07e3667120", 00:15:44.426 "strip_size_kb": 0, 00:15:44.426 "state": "configuring", 00:15:44.426 "raid_level": "raid1", 00:15:44.426 "superblock": true, 00:15:44.426 "num_base_bdevs": 2, 00:15:44.426 "num_base_bdevs_discovered": 0, 00:15:44.426 "num_base_bdevs_operational": 2, 
00:15:44.426 "base_bdevs_list": [ 00:15:44.426 { 00:15:44.426 "name": "BaseBdev1", 00:15:44.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.426 "is_configured": false, 00:15:44.426 "data_offset": 0, 00:15:44.426 "data_size": 0 00:15:44.426 }, 00:15:44.426 { 00:15:44.426 "name": "BaseBdev2", 00:15:44.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.426 "is_configured": false, 00:15:44.426 "data_offset": 0, 00:15:44.426 "data_size": 0 00:15:44.426 } 00:15:44.426 ] 00:15:44.426 }' 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.426 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.686 [2024-11-26 23:00:23.787492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.686 [2024-11-26 23:00:23.787574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.686 [2024-11-26 23:00:23.799520] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:15:44.686 [2024-11-26 23:00:23.799587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.686 [2024-11-26 23:00:23.799614] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.686 [2024-11-26 23:00:23.799634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.686 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 [2024-11-26 23:00:23.820625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.947 BaseBdev1 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.947 23:00:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.947 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 [ 00:15:44.947 { 00:15:44.947 "name": "BaseBdev1", 00:15:44.947 "aliases": [ 00:15:44.947 "fd6b0b32-e60f-41fd-85b4-dd229777def7" 00:15:44.947 ], 00:15:44.947 "product_name": "Malloc disk", 00:15:44.947 "block_size": 4096, 00:15:44.947 "num_blocks": 8192, 00:15:44.947 "uuid": "fd6b0b32-e60f-41fd-85b4-dd229777def7", 00:15:44.947 "assigned_rate_limits": { 00:15:44.947 "rw_ios_per_sec": 0, 00:15:44.947 "rw_mbytes_per_sec": 0, 00:15:44.947 "r_mbytes_per_sec": 0, 00:15:44.947 "w_mbytes_per_sec": 0 00:15:44.947 }, 00:15:44.947 "claimed": true, 00:15:44.947 "claim_type": "exclusive_write", 00:15:44.947 "zoned": false, 00:15:44.947 "supported_io_types": { 00:15:44.947 "read": true, 00:15:44.947 "write": true, 00:15:44.947 "unmap": true, 00:15:44.947 "flush": true, 00:15:44.947 "reset": true, 00:15:44.947 "nvme_admin": false, 00:15:44.947 "nvme_io": false, 00:15:44.947 "nvme_io_md": false, 00:15:44.947 "write_zeroes": true, 00:15:44.947 "zcopy": true, 00:15:44.947 "get_zone_info": false, 00:15:44.948 "zone_management": false, 00:15:44.948 "zone_append": false, 00:15:44.948 "compare": false, 00:15:44.948 "compare_and_write": false, 00:15:44.948 "abort": true, 00:15:44.948 "seek_hole": false, 00:15:44.948 "seek_data": false, 00:15:44.948 "copy": true, 00:15:44.948 "nvme_iov_md": false 
00:15:44.948 }, 00:15:44.948 "memory_domains": [ 00:15:44.948 { 00:15:44.948 "dma_device_id": "system", 00:15:44.948 "dma_device_type": 1 00:15:44.948 }, 00:15:44.948 { 00:15:44.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.948 "dma_device_type": 2 00:15:44.948 } 00:15:44.948 ], 00:15:44.948 "driver_specific": {} 00:15:44.948 } 00:15:44.948 ] 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.948 "name": "Existed_Raid", 00:15:44.948 "uuid": "9791848b-d4c1-455f-88ea-ef2e8c8cf1b4", 00:15:44.948 "strip_size_kb": 0, 00:15:44.948 "state": "configuring", 00:15:44.948 "raid_level": "raid1", 00:15:44.948 "superblock": true, 00:15:44.948 "num_base_bdevs": 2, 00:15:44.948 "num_base_bdevs_discovered": 1, 00:15:44.948 "num_base_bdevs_operational": 2, 00:15:44.948 "base_bdevs_list": [ 00:15:44.948 { 00:15:44.948 "name": "BaseBdev1", 00:15:44.948 "uuid": "fd6b0b32-e60f-41fd-85b4-dd229777def7", 00:15:44.948 "is_configured": true, 00:15:44.948 "data_offset": 256, 00:15:44.948 "data_size": 7936 00:15:44.948 }, 00:15:44.948 { 00:15:44.948 "name": "BaseBdev2", 00:15:44.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.948 "is_configured": false, 00:15:44.948 "data_offset": 0, 00:15:44.948 "data_size": 0 00:15:44.948 } 00:15:44.948 ] 00:15:44.948 }' 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.948 23:00:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.209 [2024-11-26 
23:00:24.276734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.209 [2024-11-26 23:00:24.276774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.209 [2024-11-26 23:00:24.288768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.209 [2024-11-26 23:00:24.290529] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.209 [2024-11-26 23:00:24.290611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.209 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.469 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.469 "name": "Existed_Raid", 00:15:45.469 "uuid": "c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae", 00:15:45.469 "strip_size_kb": 0, 00:15:45.469 "state": "configuring", 00:15:45.469 "raid_level": "raid1", 00:15:45.469 "superblock": true, 00:15:45.469 "num_base_bdevs": 2, 00:15:45.469 "num_base_bdevs_discovered": 1, 00:15:45.469 "num_base_bdevs_operational": 2, 00:15:45.469 "base_bdevs_list": [ 00:15:45.469 { 00:15:45.469 "name": "BaseBdev1", 00:15:45.469 "uuid": "fd6b0b32-e60f-41fd-85b4-dd229777def7", 00:15:45.469 "is_configured": true, 00:15:45.469 "data_offset": 256, 
00:15:45.469 "data_size": 7936 00:15:45.469 }, 00:15:45.469 { 00:15:45.469 "name": "BaseBdev2", 00:15:45.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.469 "is_configured": false, 00:15:45.469 "data_offset": 0, 00:15:45.469 "data_size": 0 00:15:45.469 } 00:15:45.469 ] 00:15:45.469 }' 00:15:45.469 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.469 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.730 [2024-11-26 23:00:24.731873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.730 [2024-11-26 23:00:24.732104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:45.730 [2024-11-26 23:00:24.732146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:45.730 BaseBdev2 00:15:45.730 [2024-11-26 23:00:24.732431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:45.730 [2024-11-26 23:00:24.732587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:45.730 [2024-11-26 23:00:24.732639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:45.730 [2024-11-26 23:00:24.732804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.730 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.730 [ 00:15:45.730 { 00:15:45.731 "name": "BaseBdev2", 00:15:45.731 "aliases": [ 00:15:45.731 "00bee735-cf8a-41c3-800b-787626896fec" 00:15:45.731 ], 00:15:45.731 "product_name": "Malloc disk", 00:15:45.731 "block_size": 4096, 00:15:45.731 "num_blocks": 8192, 00:15:45.731 "uuid": "00bee735-cf8a-41c3-800b-787626896fec", 00:15:45.731 "assigned_rate_limits": { 00:15:45.731 "rw_ios_per_sec": 0, 00:15:45.731 "rw_mbytes_per_sec": 0, 00:15:45.731 "r_mbytes_per_sec": 0, 00:15:45.731 "w_mbytes_per_sec": 0 00:15:45.731 }, 
00:15:45.731 "claimed": true, 00:15:45.731 "claim_type": "exclusive_write", 00:15:45.731 "zoned": false, 00:15:45.731 "supported_io_types": { 00:15:45.731 "read": true, 00:15:45.731 "write": true, 00:15:45.731 "unmap": true, 00:15:45.731 "flush": true, 00:15:45.731 "reset": true, 00:15:45.731 "nvme_admin": false, 00:15:45.731 "nvme_io": false, 00:15:45.731 "nvme_io_md": false, 00:15:45.731 "write_zeroes": true, 00:15:45.731 "zcopy": true, 00:15:45.731 "get_zone_info": false, 00:15:45.731 "zone_management": false, 00:15:45.731 "zone_append": false, 00:15:45.731 "compare": false, 00:15:45.731 "compare_and_write": false, 00:15:45.731 "abort": true, 00:15:45.731 "seek_hole": false, 00:15:45.731 "seek_data": false, 00:15:45.731 "copy": true, 00:15:45.731 "nvme_iov_md": false 00:15:45.731 }, 00:15:45.731 "memory_domains": [ 00:15:45.731 { 00:15:45.731 "dma_device_id": "system", 00:15:45.731 "dma_device_type": 1 00:15:45.731 }, 00:15:45.731 { 00:15:45.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.731 "dma_device_type": 2 00:15:45.731 } 00:15:45.731 ], 00:15:45.731 "driver_specific": {} 00:15:45.731 } 00:15:45.731 ] 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.731 23:00:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.731 "name": "Existed_Raid", 00:15:45.731 "uuid": "c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae", 00:15:45.731 "strip_size_kb": 0, 00:15:45.731 "state": "online", 00:15:45.731 "raid_level": "raid1", 00:15:45.731 "superblock": true, 00:15:45.731 "num_base_bdevs": 2, 00:15:45.731 "num_base_bdevs_discovered": 2, 00:15:45.731 "num_base_bdevs_operational": 2, 00:15:45.731 "base_bdevs_list": [ 00:15:45.731 { 00:15:45.731 "name": "BaseBdev1", 00:15:45.731 "uuid": 
"fd6b0b32-e60f-41fd-85b4-dd229777def7", 00:15:45.731 "is_configured": true, 00:15:45.731 "data_offset": 256, 00:15:45.731 "data_size": 7936 00:15:45.731 }, 00:15:45.731 { 00:15:45.731 "name": "BaseBdev2", 00:15:45.731 "uuid": "00bee735-cf8a-41c3-800b-787626896fec", 00:15:45.731 "is_configured": true, 00:15:45.731 "data_offset": 256, 00:15:45.731 "data_size": 7936 00:15:45.731 } 00:15:45.731 ] 00:15:45.731 }' 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.731 23:00:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.301 [2024-11-26 23:00:25.248308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.301 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.301 "name": "Existed_Raid", 00:15:46.301 "aliases": [ 00:15:46.302 "c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae" 00:15:46.302 ], 00:15:46.302 "product_name": "Raid Volume", 00:15:46.302 "block_size": 4096, 00:15:46.302 "num_blocks": 7936, 00:15:46.302 "uuid": "c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae", 00:15:46.302 "assigned_rate_limits": { 00:15:46.302 "rw_ios_per_sec": 0, 00:15:46.302 "rw_mbytes_per_sec": 0, 00:15:46.302 "r_mbytes_per_sec": 0, 00:15:46.302 "w_mbytes_per_sec": 0 00:15:46.302 }, 00:15:46.302 "claimed": false, 00:15:46.302 "zoned": false, 00:15:46.302 "supported_io_types": { 00:15:46.302 "read": true, 00:15:46.302 "write": true, 00:15:46.302 "unmap": false, 00:15:46.302 "flush": false, 00:15:46.302 "reset": true, 00:15:46.302 "nvme_admin": false, 00:15:46.302 "nvme_io": false, 00:15:46.302 "nvme_io_md": false, 00:15:46.302 "write_zeroes": true, 00:15:46.302 "zcopy": false, 00:15:46.302 "get_zone_info": false, 00:15:46.302 "zone_management": false, 00:15:46.302 "zone_append": false, 00:15:46.302 "compare": false, 00:15:46.302 "compare_and_write": false, 00:15:46.302 "abort": false, 00:15:46.302 "seek_hole": false, 00:15:46.302 "seek_data": false, 00:15:46.302 "copy": false, 00:15:46.302 "nvme_iov_md": false 00:15:46.302 }, 00:15:46.302 "memory_domains": [ 00:15:46.302 { 00:15:46.302 "dma_device_id": "system", 00:15:46.302 "dma_device_type": 1 00:15:46.302 }, 00:15:46.302 { 00:15:46.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.302 "dma_device_type": 2 00:15:46.302 }, 00:15:46.302 { 00:15:46.302 "dma_device_id": "system", 00:15:46.302 "dma_device_type": 1 00:15:46.302 }, 00:15:46.302 { 00:15:46.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.302 "dma_device_type": 2 00:15:46.302 } 00:15:46.302 ], 00:15:46.302 "driver_specific": { 00:15:46.302 "raid": { 00:15:46.302 "uuid": 
"c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae", 00:15:46.302 "strip_size_kb": 0, 00:15:46.302 "state": "online", 00:15:46.302 "raid_level": "raid1", 00:15:46.302 "superblock": true, 00:15:46.302 "num_base_bdevs": 2, 00:15:46.302 "num_base_bdevs_discovered": 2, 00:15:46.302 "num_base_bdevs_operational": 2, 00:15:46.302 "base_bdevs_list": [ 00:15:46.302 { 00:15:46.302 "name": "BaseBdev1", 00:15:46.302 "uuid": "fd6b0b32-e60f-41fd-85b4-dd229777def7", 00:15:46.302 "is_configured": true, 00:15:46.302 "data_offset": 256, 00:15:46.302 "data_size": 7936 00:15:46.302 }, 00:15:46.302 { 00:15:46.302 "name": "BaseBdev2", 00:15:46.302 "uuid": "00bee735-cf8a-41c3-800b-787626896fec", 00:15:46.302 "is_configured": true, 00:15:46.302 "data_offset": 256, 00:15:46.302 "data_size": 7936 00:15:46.302 } 00:15:46.302 ] 00:15:46.302 } 00:15:46.302 } 00:15:46.302 }' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:46.302 BaseBdev2' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.302 
23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.302 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.563 [2024-11-26 23:00:25.472182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.563 "name": "Existed_Raid", 00:15:46.563 "uuid": "c9ef2a20-fd6c-4df5-b6eb-f1f01f856fae", 00:15:46.563 "strip_size_kb": 0, 00:15:46.563 "state": "online", 00:15:46.563 "raid_level": "raid1", 00:15:46.563 "superblock": true, 00:15:46.563 "num_base_bdevs": 2, 00:15:46.563 "num_base_bdevs_discovered": 1, 00:15:46.563 "num_base_bdevs_operational": 1, 00:15:46.563 "base_bdevs_list": [ 00:15:46.563 { 00:15:46.563 "name": null, 00:15:46.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.563 "is_configured": false, 00:15:46.563 "data_offset": 0, 00:15:46.563 "data_size": 7936 00:15:46.563 }, 00:15:46.563 { 00:15:46.563 "name": "BaseBdev2", 00:15:46.563 "uuid": "00bee735-cf8a-41c3-800b-787626896fec", 00:15:46.563 "is_configured": true, 00:15:46.563 "data_offset": 256, 00:15:46.563 "data_size": 7936 00:15:46.563 } 00:15:46.563 ] 00:15:46.563 }' 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.563 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.826 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.826 [2024-11-26 23:00:25.939489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.826 [2024-11-26 23:00:25.939627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.826 [2024-11-26 23:00:25.951192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.085 [2024-11-26 23:00:25.951325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.085 [2024-11-26 23:00:25.951339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 
00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.085 23:00:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 97938 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 97938 ']' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 97938 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97938 00:15:47.085 killing process with pid 97938 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 97938' 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 97938 00:15:47.085 [2024-11-26 23:00:26.048856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.085 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 97938 00:15:47.085 [2024-11-26 23:00:26.049803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.346 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:47.346 00:15:47.346 real 0m3.874s 00:15:47.346 user 0m6.043s 00:15:47.346 sys 0m0.878s 00:15:47.346 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.346 23:00:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.346 ************************************ 00:15:47.346 END TEST raid_state_function_test_sb_4k 00:15:47.346 ************************************ 00:15:47.346 23:00:26 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:47.346 23:00:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:47.346 23:00:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.346 23:00:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.346 ************************************ 00:15:47.346 START TEST raid_superblock_test_4k 00:15:47.346 ************************************ 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98174 00:15:47.346 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98174 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98174 ']' 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.347 23:00:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.347 [2024-11-26 23:00:26.465914] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:47.347 [2024-11-26 23:00:26.466058] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98174 ] 00:15:47.608 [2024-11-26 23:00:26.606910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:47.608 [2024-11-26 23:00:26.646892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.608 [2024-11-26 23:00:26.672124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.608 [2024-11-26 23:00:26.715304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.608 [2024-11-26 23:00:26.715340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 malloc1 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.560 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 [2024-11-26 23:00:27.332727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.560 [2024-11-26 23:00:27.332832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.560 [2024-11-26 23:00:27.332874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.560 [2024-11-26 23:00:27.332901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.560 [2024-11-26 23:00:27.334869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.560 [2024-11-26 23:00:27.334956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.560 pt1 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.561 23:00:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.561 malloc2 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.561 [2024-11-26 23:00:27.365358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.561 [2024-11-26 23:00:27.365440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.561 [2024-11-26 23:00:27.365473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.561 [2024-11-26 23:00:27.365498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.561 [2024-11-26 23:00:27.367448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.561 [2024-11-26 23:00:27.367514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.561 pt2 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.561 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.561 [2024-11-26 23:00:27.377382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.561 [2024-11-26 23:00:27.379130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.561 [2024-11-26 23:00:27.379279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:48.561 [2024-11-26 23:00:27.379292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:48.561 [2024-11-26 23:00:27.379539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:48.561 [2024-11-26 23:00:27.379674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:48.562 [2024-11-26 23:00:27.379691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:48.562 [2024-11-26 23:00:27.379807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.562 23:00:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.562 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.562 "name": "raid_bdev1", 00:15:48.562 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:48.562 "strip_size_kb": 0, 00:15:48.562 "state": "online", 00:15:48.562 "raid_level": "raid1", 00:15:48.562 "superblock": true, 00:15:48.562 "num_base_bdevs": 2, 00:15:48.562 "num_base_bdevs_discovered": 2, 00:15:48.562 "num_base_bdevs_operational": 2, 00:15:48.562 "base_bdevs_list": [ 00:15:48.562 { 00:15:48.562 "name": "pt1", 00:15:48.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.562 "is_configured": true, 00:15:48.562 "data_offset": 256, 00:15:48.562 "data_size": 
7936 00:15:48.562 }, 00:15:48.562 { 00:15:48.562 "name": "pt2", 00:15:48.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.563 "is_configured": true, 00:15:48.563 "data_offset": 256, 00:15:48.563 "data_size": 7936 00:15:48.563 } 00:15:48.563 ] 00:15:48.563 }' 00:15:48.563 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.563 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.826 [2024-11-26 23:00:27.769740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.826 "name": "raid_bdev1", 00:15:48.826 "aliases": [ 00:15:48.826 
"c152438c-24a5-467e-99b7-63b98614ea2d" 00:15:48.826 ], 00:15:48.826 "product_name": "Raid Volume", 00:15:48.826 "block_size": 4096, 00:15:48.826 "num_blocks": 7936, 00:15:48.826 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:48.826 "assigned_rate_limits": { 00:15:48.826 "rw_ios_per_sec": 0, 00:15:48.826 "rw_mbytes_per_sec": 0, 00:15:48.826 "r_mbytes_per_sec": 0, 00:15:48.826 "w_mbytes_per_sec": 0 00:15:48.826 }, 00:15:48.826 "claimed": false, 00:15:48.826 "zoned": false, 00:15:48.826 "supported_io_types": { 00:15:48.826 "read": true, 00:15:48.826 "write": true, 00:15:48.826 "unmap": false, 00:15:48.826 "flush": false, 00:15:48.826 "reset": true, 00:15:48.826 "nvme_admin": false, 00:15:48.826 "nvme_io": false, 00:15:48.826 "nvme_io_md": false, 00:15:48.826 "write_zeroes": true, 00:15:48.826 "zcopy": false, 00:15:48.826 "get_zone_info": false, 00:15:48.826 "zone_management": false, 00:15:48.826 "zone_append": false, 00:15:48.826 "compare": false, 00:15:48.826 "compare_and_write": false, 00:15:48.826 "abort": false, 00:15:48.826 "seek_hole": false, 00:15:48.826 "seek_data": false, 00:15:48.826 "copy": false, 00:15:48.826 "nvme_iov_md": false 00:15:48.826 }, 00:15:48.826 "memory_domains": [ 00:15:48.826 { 00:15:48.826 "dma_device_id": "system", 00:15:48.826 "dma_device_type": 1 00:15:48.826 }, 00:15:48.826 { 00:15:48.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.826 "dma_device_type": 2 00:15:48.826 }, 00:15:48.826 { 00:15:48.826 "dma_device_id": "system", 00:15:48.826 "dma_device_type": 1 00:15:48.826 }, 00:15:48.826 { 00:15:48.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.826 "dma_device_type": 2 00:15:48.826 } 00:15:48.826 ], 00:15:48.826 "driver_specific": { 00:15:48.826 "raid": { 00:15:48.826 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:48.826 "strip_size_kb": 0, 00:15:48.826 "state": "online", 00:15:48.826 "raid_level": "raid1", 00:15:48.826 "superblock": true, 00:15:48.826 "num_base_bdevs": 2, 00:15:48.826 
"num_base_bdevs_discovered": 2, 00:15:48.826 "num_base_bdevs_operational": 2, 00:15:48.826 "base_bdevs_list": [ 00:15:48.826 { 00:15:48.826 "name": "pt1", 00:15:48.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.826 "is_configured": true, 00:15:48.826 "data_offset": 256, 00:15:48.826 "data_size": 7936 00:15:48.826 }, 00:15:48.826 { 00:15:48.826 "name": "pt2", 00:15:48.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.826 "is_configured": true, 00:15:48.826 "data_offset": 256, 00:15:48.826 "data_size": 7936 00:15:48.826 } 00:15:48.826 ] 00:15:48.826 } 00:15:48.826 } 00:15:48.826 }' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:48.826 pt2' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.826 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 [2024-11-26 23:00:27.965726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.087 23:00:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c152438c-24a5-467e-99b7-63b98614ea2d 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c152438c-24a5-467e-99b7-63b98614ea2d ']' 00:15:49.087 
23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 [2024-11-26 23:00:28.009519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.087 [2024-11-26 23:00:28.009540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.087 [2024-11-26 23:00:28.009599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.087 [2024-11-26 23:00:28.009654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.087 [2024-11-26 23:00:28.009673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.087 23:00:28 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 [2024-11-26 23:00:28.149572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:49.088 [2024-11-26 23:00:28.151413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:49.088 [2024-11-26 23:00:28.151502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:49.088 [2024-11-26 23:00:28.151573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:49.088 [2024-11-26 23:00:28.151610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.088 [2024-11-26 23:00:28.151646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:49.088 request: 00:15:49.088 { 00:15:49.088 "name": "raid_bdev1", 00:15:49.088 "raid_level": "raid1", 00:15:49.088 "base_bdevs": [ 00:15:49.088 "malloc1", 
00:15:49.088 "malloc2" 00:15:49.088 ], 00:15:49.088 "superblock": false, 00:15:49.088 "method": "bdev_raid_create", 00:15:49.088 "req_id": 1 00:15:49.088 } 00:15:49.088 Got JSON-RPC error response 00:15:49.088 response: 00:15:49.088 { 00:15:49.088 "code": -17, 00:15:49.088 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:49.088 } 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.088 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:15:49.088 [2024-11-26 23:00:28.209592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.088 [2024-11-26 23:00:28.209690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.088 [2024-11-26 23:00:28.209718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.088 [2024-11-26 23:00:28.209749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.088 [2024-11-26 23:00:28.211821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.088 [2024-11-26 23:00:28.211891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.088 [2024-11-26 23:00:28.211964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.088 [2024-11-26 23:00:28.212017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.349 pt1 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.349 "name": "raid_bdev1", 00:15:49.349 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:49.349 "strip_size_kb": 0, 00:15:49.349 "state": "configuring", 00:15:49.349 "raid_level": "raid1", 00:15:49.349 "superblock": true, 00:15:49.349 "num_base_bdevs": 2, 00:15:49.349 "num_base_bdevs_discovered": 1, 00:15:49.349 "num_base_bdevs_operational": 2, 00:15:49.349 "base_bdevs_list": [ 00:15:49.349 { 00:15:49.349 "name": "pt1", 00:15:49.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.349 "is_configured": true, 00:15:49.349 "data_offset": 256, 00:15:49.349 "data_size": 7936 00:15:49.349 }, 00:15:49.349 { 00:15:49.349 "name": null, 00:15:49.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.349 "is_configured": false, 00:15:49.349 "data_offset": 256, 00:15:49.349 "data_size": 7936 00:15:49.349 } 00:15:49.349 ] 00:15:49.349 }' 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.349 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.610 23:00:28 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.610 [2024-11-26 23:00:28.617675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.610 [2024-11-26 23:00:28.617768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.610 [2024-11-26 23:00:28.617787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:49.610 [2024-11-26 23:00:28.617796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.610 [2024-11-26 23:00:28.618098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.610 [2024-11-26 23:00:28.618125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.610 [2024-11-26 23:00:28.618175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.610 [2024-11-26 23:00:28.618193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.610 [2024-11-26 23:00:28.618276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:49.610 [2024-11-26 23:00:28.618287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:49.610 [2024-11-26 23:00:28.618507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:49.610 [2024-11-26 23:00:28.618614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:49.610 [2024-11-26 23:00:28.618622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:49.610 [2024-11-26 23:00:28.618737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.610 pt2 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.610 "name": "raid_bdev1", 00:15:49.610 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:49.610 "strip_size_kb": 0, 00:15:49.610 "state": "online", 00:15:49.610 "raid_level": "raid1", 00:15:49.610 "superblock": true, 00:15:49.610 "num_base_bdevs": 2, 00:15:49.610 "num_base_bdevs_discovered": 2, 00:15:49.610 "num_base_bdevs_operational": 2, 00:15:49.610 "base_bdevs_list": [ 00:15:49.610 { 00:15:49.610 "name": "pt1", 00:15:49.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.610 "is_configured": true, 00:15:49.610 "data_offset": 256, 00:15:49.610 "data_size": 7936 00:15:49.610 }, 00:15:49.610 { 00:15:49.610 "name": "pt2", 00:15:49.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.610 "is_configured": true, 00:15:49.610 "data_offset": 256, 00:15:49.610 "data_size": 7936 00:15:49.610 } 00:15:49.610 ] 00:15:49.610 }' 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.610 23:00:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.181 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.181 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.181 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.182 
23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.182 [2024-11-26 23:00:29.082001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.182 "name": "raid_bdev1", 00:15:50.182 "aliases": [ 00:15:50.182 "c152438c-24a5-467e-99b7-63b98614ea2d" 00:15:50.182 ], 00:15:50.182 "product_name": "Raid Volume", 00:15:50.182 "block_size": 4096, 00:15:50.182 "num_blocks": 7936, 00:15:50.182 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:50.182 "assigned_rate_limits": { 00:15:50.182 "rw_ios_per_sec": 0, 00:15:50.182 "rw_mbytes_per_sec": 0, 00:15:50.182 "r_mbytes_per_sec": 0, 00:15:50.182 "w_mbytes_per_sec": 0 00:15:50.182 }, 00:15:50.182 "claimed": false, 00:15:50.182 "zoned": false, 00:15:50.182 "supported_io_types": { 00:15:50.182 "read": true, 00:15:50.182 "write": true, 00:15:50.182 "unmap": false, 00:15:50.182 "flush": false, 00:15:50.182 "reset": true, 00:15:50.182 "nvme_admin": false, 00:15:50.182 "nvme_io": false, 00:15:50.182 "nvme_io_md": false, 00:15:50.182 "write_zeroes": true, 00:15:50.182 "zcopy": false, 00:15:50.182 "get_zone_info": 
false, 00:15:50.182 "zone_management": false, 00:15:50.182 "zone_append": false, 00:15:50.182 "compare": false, 00:15:50.182 "compare_and_write": false, 00:15:50.182 "abort": false, 00:15:50.182 "seek_hole": false, 00:15:50.182 "seek_data": false, 00:15:50.182 "copy": false, 00:15:50.182 "nvme_iov_md": false 00:15:50.182 }, 00:15:50.182 "memory_domains": [ 00:15:50.182 { 00:15:50.182 "dma_device_id": "system", 00:15:50.182 "dma_device_type": 1 00:15:50.182 }, 00:15:50.182 { 00:15:50.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.182 "dma_device_type": 2 00:15:50.182 }, 00:15:50.182 { 00:15:50.182 "dma_device_id": "system", 00:15:50.182 "dma_device_type": 1 00:15:50.182 }, 00:15:50.182 { 00:15:50.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.182 "dma_device_type": 2 00:15:50.182 } 00:15:50.182 ], 00:15:50.182 "driver_specific": { 00:15:50.182 "raid": { 00:15:50.182 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:50.182 "strip_size_kb": 0, 00:15:50.182 "state": "online", 00:15:50.182 "raid_level": "raid1", 00:15:50.182 "superblock": true, 00:15:50.182 "num_base_bdevs": 2, 00:15:50.182 "num_base_bdevs_discovered": 2, 00:15:50.182 "num_base_bdevs_operational": 2, 00:15:50.182 "base_bdevs_list": [ 00:15:50.182 { 00:15:50.182 "name": "pt1", 00:15:50.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.182 "is_configured": true, 00:15:50.182 "data_offset": 256, 00:15:50.182 "data_size": 7936 00:15:50.182 }, 00:15:50.182 { 00:15:50.182 "name": "pt2", 00:15:50.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.182 "is_configured": true, 00:15:50.182 "data_offset": 256, 00:15:50.182 "data_size": 7936 00:15:50.182 } 00:15:50.182 ] 00:15:50.182 } 00:15:50.182 } 00:15:50.182 }' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:15:50.182 pt2' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.182 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.442 23:00:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.442 [2024-11-26 23:00:29.318073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c152438c-24a5-467e-99b7-63b98614ea2d '!=' c152438c-24a5-467e-99b7-63b98614ea2d ']' 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.442 [2024-11-26 23:00:29.365873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:15:50.442 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.443 "name": "raid_bdev1", 00:15:50.443 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:50.443 "strip_size_kb": 0, 00:15:50.443 "state": "online", 00:15:50.443 "raid_level": "raid1", 00:15:50.443 "superblock": true, 00:15:50.443 "num_base_bdevs": 2, 00:15:50.443 "num_base_bdevs_discovered": 1, 
00:15:50.443 "num_base_bdevs_operational": 1, 00:15:50.443 "base_bdevs_list": [ 00:15:50.443 { 00:15:50.443 "name": null, 00:15:50.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.443 "is_configured": false, 00:15:50.443 "data_offset": 0, 00:15:50.443 "data_size": 7936 00:15:50.443 }, 00:15:50.443 { 00:15:50.443 "name": "pt2", 00:15:50.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.443 "is_configured": true, 00:15:50.443 "data_offset": 256, 00:15:50.443 "data_size": 7936 00:15:50.443 } 00:15:50.443 ] 00:15:50.443 }' 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.443 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.701 [2024-11-26 23:00:29.797969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.701 [2024-11-26 23:00:29.797990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.701 [2024-11-26 23:00:29.798039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.701 [2024-11-26 23:00:29.798071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.701 [2024-11-26 23:00:29.798081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.701 
23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.701 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.960 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.961 [2024-11-26 23:00:29.873993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.961 [2024-11-26 23:00:29.874078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.961 [2024-11-26 23:00:29.874105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:50.961 [2024-11-26 23:00:29.874134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.961 [2024-11-26 23:00:29.876188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.961 [2024-11-26 23:00:29.876277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.961 [2024-11-26 23:00:29.876354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.961 [2024-11-26 23:00:29.876401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.961 [2024-11-26 23:00:29.876517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:50.961 [2024-11-26 23:00:29.876555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:50.961 [2024-11-26 23:00:29.876764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:50.961 [2024-11-26 23:00:29.876908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:50.961 [2024-11-26 23:00:29.876947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:50.961 [2024-11-26 23:00:29.877069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.961 pt2 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.961 "name": "raid_bdev1", 00:15:50.961 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:50.961 "strip_size_kb": 0, 00:15:50.961 "state": 
"online", 00:15:50.961 "raid_level": "raid1", 00:15:50.961 "superblock": true, 00:15:50.961 "num_base_bdevs": 2, 00:15:50.961 "num_base_bdevs_discovered": 1, 00:15:50.961 "num_base_bdevs_operational": 1, 00:15:50.961 "base_bdevs_list": [ 00:15:50.961 { 00:15:50.961 "name": null, 00:15:50.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.961 "is_configured": false, 00:15:50.961 "data_offset": 256, 00:15:50.961 "data_size": 7936 00:15:50.961 }, 00:15:50.961 { 00:15:50.961 "name": "pt2", 00:15:50.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.961 "is_configured": true, 00:15:50.961 "data_offset": 256, 00:15:50.961 "data_size": 7936 00:15:50.961 } 00:15:50.961 ] 00:15:50.961 }' 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.961 23:00:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 [2024-11-26 23:00:30.298112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.220 [2024-11-26 23:00:30.298136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.220 [2024-11-26 23:00:30.298178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.220 [2024-11-26 23:00:30.298212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.220 [2024-11-26 23:00:30.298219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.481 [2024-11-26 23:00:30.358130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.481 [2024-11-26 23:00:30.358171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.481 [2024-11-26 23:00:30.358187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:51.481 [2024-11-26 23:00:30.358195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.481 [2024-11-26 23:00:30.360242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.481 [2024-11-26 23:00:30.360332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.481 
[2024-11-26 23:00:30.360395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.481 [2024-11-26 23:00:30.360419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.481 [2024-11-26 23:00:30.360522] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.481 [2024-11-26 23:00:30.360533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.481 [2024-11-26 23:00:30.360548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:51.481 [2024-11-26 23:00:30.360582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.481 [2024-11-26 23:00:30.360637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:51.481 [2024-11-26 23:00:30.360644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.481 [2024-11-26 23:00:30.360841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.481 [2024-11-26 23:00:30.360947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:51.481 [2024-11-26 23:00:30.360959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:51.481 [2024-11-26 23:00:30.361051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.481 pt1 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.481 "name": "raid_bdev1", 00:15:51.481 "uuid": "c152438c-24a5-467e-99b7-63b98614ea2d", 00:15:51.481 "strip_size_kb": 0, 00:15:51.481 "state": "online", 00:15:51.481 "raid_level": "raid1", 00:15:51.481 "superblock": true, 00:15:51.481 "num_base_bdevs": 2, 00:15:51.481 "num_base_bdevs_discovered": 1, 00:15:51.481 "num_base_bdevs_operational": 1, 00:15:51.481 "base_bdevs_list": [ 
00:15:51.481 { 00:15:51.481 "name": null, 00:15:51.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.481 "is_configured": false, 00:15:51.481 "data_offset": 256, 00:15:51.481 "data_size": 7936 00:15:51.481 }, 00:15:51.481 { 00:15:51.481 "name": "pt2", 00:15:51.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.481 "is_configured": true, 00:15:51.481 "data_offset": 256, 00:15:51.481 "data_size": 7936 00:15:51.481 } 00:15:51.481 ] 00:15:51.481 }' 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.481 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.741 [2024-11-26 23:00:30.770435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c152438c-24a5-467e-99b7-63b98614ea2d '!=' c152438c-24a5-467e-99b7-63b98614ea2d ']' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98174 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98174 ']' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98174 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98174 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98174' 00:15:51.741 killing process with pid 98174 00:15:51.741 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98174 00:15:51.741 [2024-11-26 23:00:30.828500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.741 [2024-11-26 23:00:30.828594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.741 [2024-11-26 23:00:30.828649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.741 [2024-11-26 23:00:30.828699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, sta 23:00:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 98174 00:15:51.741 
te offline 00:15:51.741 [2024-11-26 23:00:30.851136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.002 23:00:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:52.002 00:15:52.002 real 0m4.714s 00:15:52.002 user 0m7.671s 00:15:52.002 sys 0m1.056s 00:15:52.002 23:00:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.002 23:00:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.002 ************************************ 00:15:52.002 END TEST raid_superblock_test_4k 00:15:52.002 ************************************ 00:15:52.263 23:00:31 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:52.263 23:00:31 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:52.263 23:00:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:52.263 23:00:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.263 23:00:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.263 ************************************ 00:15:52.263 START TEST raid_rebuild_test_sb_4k 00:15:52.263 ************************************ 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:52.263 23:00:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:52.263 
23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98485 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98485 00:15:52.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98485 ']' 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.263 23:00:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.263 [2024-11-26 23:00:31.270131] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:52.263 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:52.263 Zero copy mechanism will not be used. 00:15:52.263 [2024-11-26 23:00:31.270342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98485 ] 00:15:52.523 [2024-11-26 23:00:31.411468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:52.523 [2024-11-26 23:00:31.450447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.523 [2024-11-26 23:00:31.477687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.523 [2024-11-26 23:00:31.521809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.524 [2024-11-26 23:00:31.521851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.094 BaseBdev1_malloc 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.094 [2024-11-26 23:00:32.098670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.094 [2024-11-26 23:00:32.098748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.094 [2024-11-26 23:00:32.098773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:15:53.094 [2024-11-26 23:00:32.098787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.094 [2024-11-26 23:00:32.100847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.094 [2024-11-26 23:00:32.100934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.094 BaseBdev1 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.094 BaseBdev2_malloc 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.094 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.094 [2024-11-26 23:00:32.127359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:53.094 [2024-11-26 23:00:32.127451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.094 [2024-11-26 23:00:32.127472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.094 [2024-11-26 23:00:32.127482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.094 [2024-11-26 
23:00:32.129400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.094 [2024-11-26 23:00:32.129437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.094 BaseBdev2 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.095 spare_malloc 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.095 spare_delay 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.095 [2024-11-26 23:00:32.167805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.095 [2024-11-26 23:00:32.167857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.095 [2024-11-26 23:00:32.167876] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:53.095 [2024-11-26 23:00:32.167888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.095 [2024-11-26 23:00:32.169813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.095 [2024-11-26 23:00:32.169851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.095 spare 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.095 [2024-11-26 23:00:32.179864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.095 [2024-11-26 23:00:32.181581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.095 [2024-11-26 23:00:32.181714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:53.095 [2024-11-26 23:00:32.181728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:53.095 [2024-11-26 23:00:32.181946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:53.095 [2024-11-26 23:00:32.182075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:53.095 [2024-11-26 23:00:32.182084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:53.095 [2024-11-26 23:00:32.182191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.095 23:00:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.095 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.359 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.359 "name": "raid_bdev1", 00:15:53.359 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:53.359 
"strip_size_kb": 0, 00:15:53.359 "state": "online", 00:15:53.359 "raid_level": "raid1", 00:15:53.359 "superblock": true, 00:15:53.359 "num_base_bdevs": 2, 00:15:53.359 "num_base_bdevs_discovered": 2, 00:15:53.360 "num_base_bdevs_operational": 2, 00:15:53.360 "base_bdevs_list": [ 00:15:53.360 { 00:15:53.360 "name": "BaseBdev1", 00:15:53.360 "uuid": "822dbdc7-40cd-5493-8648-fc105957c209", 00:15:53.360 "is_configured": true, 00:15:53.360 "data_offset": 256, 00:15:53.360 "data_size": 7936 00:15:53.360 }, 00:15:53.360 { 00:15:53.360 "name": "BaseBdev2", 00:15:53.360 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:53.360 "is_configured": true, 00:15:53.360 "data_offset": 256, 00:15:53.360 "data_size": 7936 00:15:53.360 } 00:15:53.360 ] 00:15:53.360 }' 00:15:53.360 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.360 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.620 [2024-11-26 23:00:32.596184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.620 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:53.620 
23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.621 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:53.881 [2024-11-26 23:00:32.844072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:15:53.881 /dev/nbd0 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.881 1+0 records in 00:15:53.881 1+0 records out 00:15:53.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481214 s, 8.5 MB/s 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.881 23:00:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:53.881 23:00:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:54.452 7936+0 records in 00:15:54.452 7936+0 records out 00:15:54.452 32505856 bytes (33 MB, 31 MiB) copied, 0.612669 s, 53.1 MB/s 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.452 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:54.718 [2024-11-26 23:00:33.756243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.718 [2024-11-26 23:00:33.788707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.718 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.977 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.977 "name": "raid_bdev1", 00:15:54.977 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:54.977 "strip_size_kb": 0, 00:15:54.977 "state": "online", 00:15:54.977 "raid_level": "raid1", 00:15:54.977 "superblock": true, 00:15:54.977 "num_base_bdevs": 2, 00:15:54.977 "num_base_bdevs_discovered": 1, 00:15:54.977 "num_base_bdevs_operational": 1, 00:15:54.977 "base_bdevs_list": [ 00:15:54.977 { 00:15:54.977 "name": null, 00:15:54.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.977 "is_configured": false, 00:15:54.977 "data_offset": 0, 00:15:54.977 "data_size": 7936 00:15:54.977 }, 00:15:54.977 { 00:15:54.977 "name": "BaseBdev2", 00:15:54.977 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:54.977 "is_configured": true, 00:15:54.977 "data_offset": 256, 00:15:54.977 "data_size": 7936 00:15:54.977 } 00:15:54.977 ] 00:15:54.977 }' 00:15:54.977 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.977 23:00:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.238 23:00:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.238 23:00:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.238 23:00:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.238 [2024-11-26 23:00:34.268820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.238 [2024-11-26 23:00:34.273800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:15:55.238 23:00:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.238 23:00:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.238 [2024-11-26 23:00:34.275735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.178 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.437 23:00:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.437 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.437 "name": "raid_bdev1", 00:15:56.437 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:56.437 "strip_size_kb": 0, 00:15:56.437 "state": "online", 00:15:56.437 "raid_level": "raid1", 00:15:56.437 "superblock": true, 00:15:56.437 "num_base_bdevs": 2, 00:15:56.437 "num_base_bdevs_discovered": 2, 00:15:56.437 "num_base_bdevs_operational": 2, 00:15:56.437 "process": { 00:15:56.437 "type": "rebuild", 00:15:56.437 "target": "spare", 00:15:56.437 "progress": { 00:15:56.437 "blocks": 2560, 00:15:56.437 "percent": 32 00:15:56.437 } 00:15:56.437 }, 00:15:56.437 "base_bdevs_list": [ 00:15:56.437 { 00:15:56.437 "name": "spare", 00:15:56.437 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:15:56.437 "is_configured": true, 00:15:56.437 "data_offset": 256, 00:15:56.437 "data_size": 7936 00:15:56.437 }, 00:15:56.437 { 00:15:56.437 "name": "BaseBdev2", 00:15:56.437 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:56.437 "is_configured": true, 00:15:56.437 "data_offset": 256, 00:15:56.437 "data_size": 7936 00:15:56.437 } 00:15:56.437 ] 00:15:56.437 }' 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.438 23:00:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.438 [2024-11-26 23:00:35.427069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.438 [2024-11-26 23:00:35.482375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.438 [2024-11-26 23:00:35.482437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.438 [2024-11-26 23:00:35.482450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.438 [2024-11-26 23:00:35.482459] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.438 23:00:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.438 "name": "raid_bdev1", 00:15:56.438 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:56.438 "strip_size_kb": 0, 00:15:56.438 "state": "online", 00:15:56.438 "raid_level": "raid1", 00:15:56.438 "superblock": true, 00:15:56.438 "num_base_bdevs": 2, 00:15:56.438 "num_base_bdevs_discovered": 1, 00:15:56.438 "num_base_bdevs_operational": 1, 00:15:56.438 "base_bdevs_list": [ 00:15:56.438 { 00:15:56.438 "name": null, 00:15:56.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.438 "is_configured": false, 00:15:56.438 "data_offset": 0, 00:15:56.438 "data_size": 7936 00:15:56.438 }, 00:15:56.438 { 00:15:56.438 "name": "BaseBdev2", 00:15:56.438 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:56.438 "is_configured": true, 00:15:56.438 "data_offset": 256, 00:15:56.438 "data_size": 7936 00:15:56.438 } 00:15:56.438 ] 00:15:56.438 }' 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.438 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.007 23:00:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.007 "name": "raid_bdev1", 00:15:57.007 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:57.007 "strip_size_kb": 0, 00:15:57.007 "state": "online", 00:15:57.007 "raid_level": "raid1", 00:15:57.007 "superblock": true, 00:15:57.007 "num_base_bdevs": 2, 00:15:57.007 "num_base_bdevs_discovered": 1, 00:15:57.007 "num_base_bdevs_operational": 1, 00:15:57.007 "base_bdevs_list": [ 00:15:57.007 { 00:15:57.007 "name": null, 00:15:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.007 "is_configured": false, 00:15:57.007 "data_offset": 0, 00:15:57.007 "data_size": 7936 00:15:57.007 }, 00:15:57.007 { 00:15:57.007 "name": "BaseBdev2", 00:15:57.007 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:57.007 "is_configured": true, 00:15:57.007 "data_offset": 256, 00:15:57.007 "data_size": 7936 00:15:57.007 } 00:15:57.007 ] 00:15:57.007 }' 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.007 23:00:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.007 23:00:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.007 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.007 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.007 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.008 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.008 [2024-11-26 23:00:36.023185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.008 [2024-11-26 23:00:36.027353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:15:57.008 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.008 23:00:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.008 [2024-11-26 23:00:36.029178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.951 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.228 "name": "raid_bdev1", 00:15:58.228 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:58.228 "strip_size_kb": 0, 00:15:58.228 "state": "online", 00:15:58.228 "raid_level": "raid1", 00:15:58.228 "superblock": true, 00:15:58.228 "num_base_bdevs": 2, 00:15:58.228 "num_base_bdevs_discovered": 2, 00:15:58.228 "num_base_bdevs_operational": 2, 00:15:58.228 "process": { 00:15:58.228 "type": "rebuild", 00:15:58.228 "target": "spare", 00:15:58.228 "progress": { 00:15:58.228 "blocks": 2560, 00:15:58.228 "percent": 32 00:15:58.228 } 00:15:58.228 }, 00:15:58.228 "base_bdevs_list": [ 00:15:58.228 { 00:15:58.228 "name": "spare", 00:15:58.228 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:15:58.228 "is_configured": true, 00:15:58.228 "data_offset": 256, 00:15:58.228 "data_size": 7936 00:15:58.228 }, 00:15:58.228 { 00:15:58.228 "name": "BaseBdev2", 00:15:58.228 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:58.228 "is_configured": true, 00:15:58.228 "data_offset": 256, 00:15:58.228 "data_size": 7936 00:15:58.228 } 00:15:58.228 ] 00:15:58.228 }' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:58.228 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=566 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.228 23:00:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.228 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.228 "name": "raid_bdev1", 00:15:58.228 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:58.228 "strip_size_kb": 0, 00:15:58.228 "state": "online", 00:15:58.228 "raid_level": "raid1", 00:15:58.228 "superblock": true, 00:15:58.228 "num_base_bdevs": 2, 00:15:58.228 "num_base_bdevs_discovered": 2, 00:15:58.228 "num_base_bdevs_operational": 2, 00:15:58.228 "process": { 00:15:58.228 "type": "rebuild", 00:15:58.228 "target": "spare", 00:15:58.228 "progress": { 00:15:58.228 "blocks": 2816, 00:15:58.228 "percent": 35 00:15:58.228 } 00:15:58.228 }, 00:15:58.228 "base_bdevs_list": [ 00:15:58.228 { 00:15:58.228 "name": "spare", 00:15:58.228 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:15:58.229 "is_configured": true, 00:15:58.229 "data_offset": 256, 00:15:58.229 "data_size": 7936 00:15:58.229 }, 00:15:58.229 { 00:15:58.229 "name": "BaseBdev2", 00:15:58.229 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:58.229 "is_configured": true, 00:15:58.229 "data_offset": 256, 00:15:58.229 "data_size": 7936 00:15:58.229 } 00:15:58.229 ] 00:15:58.229 }' 00:15:58.229 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.229 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.229 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.229 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.229 23:00:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.189 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.449 "name": "raid_bdev1", 00:15:59.449 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:15:59.449 "strip_size_kb": 0, 00:15:59.449 "state": "online", 00:15:59.449 "raid_level": "raid1", 00:15:59.449 "superblock": true, 00:15:59.449 "num_base_bdevs": 2, 00:15:59.449 "num_base_bdevs_discovered": 2, 00:15:59.449 "num_base_bdevs_operational": 2, 00:15:59.449 "process": { 00:15:59.449 "type": "rebuild", 00:15:59.449 "target": "spare", 00:15:59.449 "progress": { 00:15:59.449 "blocks": 5632, 00:15:59.449 "percent": 70 00:15:59.449 } 00:15:59.449 }, 00:15:59.449 "base_bdevs_list": [ 00:15:59.449 { 00:15:59.449 "name": "spare", 00:15:59.449 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:15:59.449 "is_configured": true, 00:15:59.449 "data_offset": 256, 00:15:59.449 "data_size": 7936 00:15:59.449 
}, 00:15:59.449 { 00:15:59.449 "name": "BaseBdev2", 00:15:59.449 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:15:59.449 "is_configured": true, 00:15:59.449 "data_offset": 256, 00:15:59.449 "data_size": 7936 00:15:59.449 } 00:15:59.449 ] 00:15:59.449 }' 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.449 23:00:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.388 [2024-11-26 23:00:39.144772] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.389 [2024-11-26 23:00:39.144839] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.389 [2024-11-26 23:00:39.144931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.389 "name": "raid_bdev1", 00:16:00.389 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:00.389 "strip_size_kb": 0, 00:16:00.389 "state": "online", 00:16:00.389 "raid_level": "raid1", 00:16:00.389 "superblock": true, 00:16:00.389 "num_base_bdevs": 2, 00:16:00.389 "num_base_bdevs_discovered": 2, 00:16:00.389 "num_base_bdevs_operational": 2, 00:16:00.389 "base_bdevs_list": [ 00:16:00.389 { 00:16:00.389 "name": "spare", 00:16:00.389 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:00.389 "is_configured": true, 00:16:00.389 "data_offset": 256, 00:16:00.389 "data_size": 7936 00:16:00.389 }, 00:16:00.389 { 00:16:00.389 "name": "BaseBdev2", 00:16:00.389 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:00.389 "is_configured": true, 00:16:00.389 "data_offset": 256, 00:16:00.389 "data_size": 7936 00:16:00.389 } 00:16:00.389 ] 00:16:00.389 }' 00:16:00.389 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.649 "name": "raid_bdev1", 00:16:00.649 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:00.649 "strip_size_kb": 0, 00:16:00.649 "state": "online", 00:16:00.649 "raid_level": "raid1", 00:16:00.649 "superblock": true, 00:16:00.649 "num_base_bdevs": 2, 00:16:00.649 "num_base_bdevs_discovered": 2, 00:16:00.649 "num_base_bdevs_operational": 2, 00:16:00.649 "base_bdevs_list": [ 00:16:00.649 { 00:16:00.649 "name": "spare", 00:16:00.649 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:00.649 "is_configured": true, 00:16:00.649 "data_offset": 256, 00:16:00.649 "data_size": 7936 00:16:00.649 }, 00:16:00.649 { 00:16:00.649 "name": "BaseBdev2", 00:16:00.649 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:00.649 "is_configured": true, 
00:16:00.649 "data_offset": 256, 00:16:00.649 "data_size": 7936 00:16:00.649 } 00:16:00.649 ] 00:16:00.649 }' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.649 23:00:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.649 "name": "raid_bdev1", 00:16:00.649 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:00.649 "strip_size_kb": 0, 00:16:00.649 "state": "online", 00:16:00.649 "raid_level": "raid1", 00:16:00.649 "superblock": true, 00:16:00.649 "num_base_bdevs": 2, 00:16:00.649 "num_base_bdevs_discovered": 2, 00:16:00.649 "num_base_bdevs_operational": 2, 00:16:00.649 "base_bdevs_list": [ 00:16:00.649 { 00:16:00.649 "name": "spare", 00:16:00.649 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:00.649 "is_configured": true, 00:16:00.649 "data_offset": 256, 00:16:00.649 "data_size": 7936 00:16:00.649 }, 00:16:00.649 { 00:16:00.649 "name": "BaseBdev2", 00:16:00.649 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:00.649 "is_configured": true, 00:16:00.649 "data_offset": 256, 00:16:00.649 "data_size": 7936 00:16:00.649 } 00:16:00.649 ] 00:16:00.649 }' 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.649 23:00:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.219 [2024-11-26 23:00:40.145168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.219 [2024-11-26 23:00:40.145241] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:16:01.219 [2024-11-26 23:00:40.145343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.219 [2024-11-26 23:00:40.145419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.219 [2024-11-26 23:00:40.145451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.219 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:01.220 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.220 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:01.220 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.220 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.220 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:01.479 /dev/nbd0 00:16:01.479 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.479 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.479 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.479 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.480 1+0 records in 00:16:01.480 1+0 records out 00:16:01.480 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461239 s, 8.9 MB/s 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.480 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:01.740 /dev/nbd1 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:01.740 23:00:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.740 1+0 records in 00:16:01.740 1+0 records out 00:16:01.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495058 s, 8.3 MB/s 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.740 23:00:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.999 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.259 23:00:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.259 [2024-11-26 23:00:41.288587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:02.259 [2024-11-26 23:00:41.288641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.259 [2024-11-26 23:00:41.288665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:02.259 [2024-11-26 23:00:41.288674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.259 [2024-11-26 23:00:41.290612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.259 [2024-11-26 23:00:41.290649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:02.259 [2024-11-26 23:00:41.290751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:16:02.259 [2024-11-26 23:00:41.290787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.259 [2024-11-26 23:00:41.290895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.259 spare 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.259 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.519 [2024-11-26 23:00:41.390952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:02.519 [2024-11-26 23:00:41.390987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:02.519 [2024-11-26 23:00:41.391244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:02.519 [2024-11-26 23:00:41.391408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:02.519 [2024-11-26 23:00:41.391450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:02.519 [2024-11-26 23:00:41.391575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.519 
23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.519 "name": "raid_bdev1", 00:16:02.519 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:02.519 "strip_size_kb": 0, 00:16:02.519 "state": "online", 00:16:02.519 "raid_level": "raid1", 00:16:02.519 "superblock": true, 00:16:02.519 "num_base_bdevs": 2, 00:16:02.519 "num_base_bdevs_discovered": 2, 00:16:02.519 "num_base_bdevs_operational": 2, 00:16:02.519 "base_bdevs_list": [ 00:16:02.519 { 00:16:02.519 "name": "spare", 00:16:02.519 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:02.519 "is_configured": true, 00:16:02.519 "data_offset": 256, 00:16:02.519 
"data_size": 7936 00:16:02.519 }, 00:16:02.519 { 00:16:02.519 "name": "BaseBdev2", 00:16:02.519 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:02.519 "is_configured": true, 00:16:02.519 "data_offset": 256, 00:16:02.519 "data_size": 7936 00:16:02.519 } 00:16:02.519 ] 00:16:02.519 }' 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.519 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.779 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.779 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.780 "name": "raid_bdev1", 00:16:02.780 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:02.780 "strip_size_kb": 0, 00:16:02.780 "state": "online", 00:16:02.780 "raid_level": "raid1", 00:16:02.780 "superblock": true, 00:16:02.780 "num_base_bdevs": 2, 
00:16:02.780 "num_base_bdevs_discovered": 2, 00:16:02.780 "num_base_bdevs_operational": 2, 00:16:02.780 "base_bdevs_list": [ 00:16:02.780 { 00:16:02.780 "name": "spare", 00:16:02.780 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:02.780 "is_configured": true, 00:16:02.780 "data_offset": 256, 00:16:02.780 "data_size": 7936 00:16:02.780 }, 00:16:02.780 { 00:16:02.780 "name": "BaseBdev2", 00:16:02.780 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:02.780 "is_configured": true, 00:16:02.780 "data_offset": 256, 00:16:02.780 "data_size": 7936 00:16:02.780 } 00:16:02.780 ] 00:16:02.780 }' 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.780 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.040 23:00:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.040 [2024-11-26 23:00:41.988796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.040 23:00:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.040 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.040 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.040 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.040 
23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.040 "name": "raid_bdev1", 00:16:03.040 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:03.040 "strip_size_kb": 0, 00:16:03.040 "state": "online", 00:16:03.040 "raid_level": "raid1", 00:16:03.040 "superblock": true, 00:16:03.040 "num_base_bdevs": 2, 00:16:03.040 "num_base_bdevs_discovered": 1, 00:16:03.040 "num_base_bdevs_operational": 1, 00:16:03.040 "base_bdevs_list": [ 00:16:03.040 { 00:16:03.040 "name": null, 00:16:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.040 "is_configured": false, 00:16:03.040 "data_offset": 0, 00:16:03.040 "data_size": 7936 00:16:03.040 }, 00:16:03.040 { 00:16:03.040 "name": "BaseBdev2", 00:16:03.040 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:03.040 "is_configured": true, 00:16:03.040 "data_offset": 256, 00:16:03.040 "data_size": 7936 00:16:03.040 } 00:16:03.040 ] 00:16:03.040 }' 00:16:03.040 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.040 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.609 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.609 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.609 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.609 [2024-11-26 23:00:42.452952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.610 [2024-11-26 23:00:42.453133] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.610 [2024-11-26 23:00:42.453195] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:03.610 [2024-11-26 23:00:42.453246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.610 [2024-11-26 23:00:42.457986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:03.610 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.610 23:00:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:03.610 [2024-11-26 23:00:42.459831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.547 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.547 "name": "raid_bdev1", 00:16:04.547 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:04.547 "strip_size_kb": 0, 00:16:04.547 "state": "online", 
00:16:04.547 "raid_level": "raid1", 00:16:04.547 "superblock": true, 00:16:04.547 "num_base_bdevs": 2, 00:16:04.547 "num_base_bdevs_discovered": 2, 00:16:04.547 "num_base_bdevs_operational": 2, 00:16:04.547 "process": { 00:16:04.547 "type": "rebuild", 00:16:04.547 "target": "spare", 00:16:04.547 "progress": { 00:16:04.547 "blocks": 2560, 00:16:04.547 "percent": 32 00:16:04.547 } 00:16:04.547 }, 00:16:04.547 "base_bdevs_list": [ 00:16:04.547 { 00:16:04.547 "name": "spare", 00:16:04.547 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:04.547 "is_configured": true, 00:16:04.547 "data_offset": 256, 00:16:04.547 "data_size": 7936 00:16:04.547 }, 00:16:04.547 { 00:16:04.547 "name": "BaseBdev2", 00:16:04.547 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:04.547 "is_configured": true, 00:16:04.547 "data_offset": 256, 00:16:04.547 "data_size": 7936 00:16:04.547 } 00:16:04.547 ] 00:16:04.548 }' 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.548 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.548 [2024-11-26 23:00:43.614899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.548 [2024-11-26 23:00:43.665833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.548 [2024-11-26 
23:00:43.665891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.548 [2024-11-26 23:00:43.665905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.548 [2024-11-26 23:00:43.665913] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.807 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.807 "name": "raid_bdev1", 00:16:04.807 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:04.808 "strip_size_kb": 0, 00:16:04.808 "state": "online", 00:16:04.808 "raid_level": "raid1", 00:16:04.808 "superblock": true, 00:16:04.808 "num_base_bdevs": 2, 00:16:04.808 "num_base_bdevs_discovered": 1, 00:16:04.808 "num_base_bdevs_operational": 1, 00:16:04.808 "base_bdevs_list": [ 00:16:04.808 { 00:16:04.808 "name": null, 00:16:04.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.808 "is_configured": false, 00:16:04.808 "data_offset": 0, 00:16:04.808 "data_size": 7936 00:16:04.808 }, 00:16:04.808 { 00:16:04.808 "name": "BaseBdev2", 00:16:04.808 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:04.808 "is_configured": true, 00:16:04.808 "data_offset": 256, 00:16:04.808 "data_size": 7936 00:16:04.808 } 00:16:04.808 ] 00:16:04.808 }' 00:16:04.808 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.808 23:00:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.068 23:00:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.068 23:00:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.068 23:00:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.068 [2024-11-26 23:00:44.138424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.068 [2024-11-26 23:00:44.138524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.068 [2024-11-26 23:00:44.138559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:05.068 [2024-11-26 23:00:44.138588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.068 [2024-11-26 23:00:44.139039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.068 [2024-11-26 23:00:44.139100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.068 [2024-11-26 23:00:44.139195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:05.068 [2024-11-26 23:00:44.139229] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.068 [2024-11-26 23:00:44.139271] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:05.068 [2024-11-26 23:00:44.139306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.068 [2024-11-26 23:00:44.143182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:05.068 spare 00:16:05.068 23:00:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.068 23:00:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:05.068 [2024-11-26 23:00:44.145029] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.447 "name": "raid_bdev1", 00:16:06.447 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:06.447 "strip_size_kb": 0, 00:16:06.447 "state": "online", 00:16:06.447 "raid_level": "raid1", 00:16:06.447 "superblock": true, 00:16:06.447 "num_base_bdevs": 2, 00:16:06.447 "num_base_bdevs_discovered": 2, 00:16:06.447 "num_base_bdevs_operational": 2, 00:16:06.447 "process": { 00:16:06.447 "type": "rebuild", 00:16:06.447 "target": "spare", 00:16:06.447 "progress": { 00:16:06.447 "blocks": 2560, 00:16:06.447 "percent": 32 00:16:06.447 } 00:16:06.447 }, 00:16:06.447 "base_bdevs_list": [ 00:16:06.447 { 00:16:06.447 "name": "spare", 00:16:06.447 "uuid": "67ac5ab0-d71d-5da2-81ad-6d2920d4d07d", 00:16:06.447 "is_configured": true, 00:16:06.447 "data_offset": 256, 00:16:06.447 "data_size": 7936 00:16:06.447 }, 00:16:06.447 { 00:16:06.447 "name": "BaseBdev2", 00:16:06.447 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:06.447 "is_configured": true, 00:16:06.447 "data_offset": 256, 00:16:06.447 "data_size": 7936 00:16:06.447 } 00:16:06.447 ] 00:16:06.447 }' 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.447 [2024-11-26 23:00:45.283655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.447 [2024-11-26 23:00:45.351038] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.447 [2024-11-26 23:00:45.351141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.447 [2024-11-26 23:00:45.351160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.447 [2024-11-26 23:00:45.351167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.447 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.448 "name": "raid_bdev1", 00:16:06.448 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:06.448 "strip_size_kb": 0, 00:16:06.448 "state": "online", 00:16:06.448 "raid_level": "raid1", 00:16:06.448 "superblock": true, 00:16:06.448 "num_base_bdevs": 2, 00:16:06.448 "num_base_bdevs_discovered": 1, 00:16:06.448 "num_base_bdevs_operational": 1, 00:16:06.448 "base_bdevs_list": [ 00:16:06.448 { 00:16:06.448 "name": null, 00:16:06.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.448 "is_configured": false, 00:16:06.448 "data_offset": 0, 00:16:06.448 "data_size": 7936 00:16:06.448 }, 00:16:06.448 { 00:16:06.448 "name": "BaseBdev2", 00:16:06.448 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:06.448 "is_configured": true, 00:16:06.448 "data_offset": 256, 00:16:06.448 "data_size": 7936 00:16:06.448 } 00:16:06.448 ] 00:16:06.448 }' 
00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.448 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.711 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.711 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.972 "name": "raid_bdev1", 00:16:06.972 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:06.972 "strip_size_kb": 0, 00:16:06.972 "state": "online", 00:16:06.972 "raid_level": "raid1", 00:16:06.972 "superblock": true, 00:16:06.972 "num_base_bdevs": 2, 00:16:06.972 "num_base_bdevs_discovered": 1, 00:16:06.972 "num_base_bdevs_operational": 1, 00:16:06.972 "base_bdevs_list": [ 00:16:06.972 { 00:16:06.972 "name": null, 00:16:06.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.972 "is_configured": false, 00:16:06.972 "data_offset": 0, 
00:16:06.972 "data_size": 7936 00:16:06.972 }, 00:16:06.972 { 00:16:06.972 "name": "BaseBdev2", 00:16:06.972 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:06.972 "is_configured": true, 00:16:06.972 "data_offset": 256, 00:16:06.972 "data_size": 7936 00:16:06.972 } 00:16:06.972 ] 00:16:06.972 }' 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.972 [2024-11-26 23:00:45.987400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.972 [2024-11-26 23:00:45.987448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.972 [2024-11-26 23:00:45.987470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:06.972 [2024-11-26 23:00:45.987480] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.972 [2024-11-26 23:00:45.987863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.972 [2024-11-26 23:00:45.987879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.972 [2024-11-26 23:00:45.987950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:06.972 [2024-11-26 23:00:45.987963] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.972 [2024-11-26 23:00:45.987974] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.972 [2024-11-26 23:00:45.987984] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:06.972 BaseBdev1 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.972 23:00:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.912 23:00:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.912 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.912 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.912 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.912 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.912 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.188 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.188 "name": "raid_bdev1", 00:16:08.188 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:08.188 "strip_size_kb": 0, 00:16:08.188 "state": "online", 00:16:08.188 "raid_level": "raid1", 00:16:08.188 "superblock": true, 00:16:08.188 "num_base_bdevs": 2, 00:16:08.188 "num_base_bdevs_discovered": 1, 00:16:08.188 "num_base_bdevs_operational": 1, 00:16:08.188 "base_bdevs_list": [ 00:16:08.189 { 00:16:08.189 "name": null, 00:16:08.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.189 "is_configured": false, 00:16:08.189 "data_offset": 0, 00:16:08.189 "data_size": 7936 00:16:08.189 }, 00:16:08.189 { 00:16:08.189 "name": "BaseBdev2", 00:16:08.189 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:08.189 "is_configured": true, 00:16:08.189 "data_offset": 256, 00:16:08.189 "data_size": 7936 00:16:08.189 } 00:16:08.189 ] 00:16:08.189 }' 00:16:08.189 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.189 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:08.454 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.455 "name": "raid_bdev1", 00:16:08.455 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:08.455 "strip_size_kb": 0, 00:16:08.455 "state": "online", 00:16:08.455 "raid_level": "raid1", 00:16:08.455 "superblock": true, 00:16:08.455 "num_base_bdevs": 2, 00:16:08.455 "num_base_bdevs_discovered": 1, 00:16:08.455 "num_base_bdevs_operational": 1, 00:16:08.455 "base_bdevs_list": [ 00:16:08.455 { 00:16:08.455 "name": null, 00:16:08.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.455 "is_configured": false, 00:16:08.455 "data_offset": 0, 00:16:08.455 "data_size": 7936 00:16:08.455 }, 00:16:08.455 { 00:16:08.455 "name": "BaseBdev2", 00:16:08.455 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:08.455 "is_configured": true, 
00:16:08.455 "data_offset": 256, 00:16:08.455 "data_size": 7936 00:16:08.455 } 00:16:08.455 ] 00:16:08.455 }' 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.455 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.714 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.714 [2024-11-26 23:00:47.607833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.714 [2024-11-26 23:00:47.607956] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.715 [2024-11-26 23:00:47.607968] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:08.715 request: 00:16:08.715 { 00:16:08.715 "base_bdev": "BaseBdev1", 00:16:08.715 "raid_bdev": "raid_bdev1", 00:16:08.715 "method": "bdev_raid_add_base_bdev", 00:16:08.715 "req_id": 1 00:16:08.715 } 00:16:08.715 Got JSON-RPC error response 00:16:08.715 response: 00:16:08.715 { 00:16:08.715 "code": -22, 00:16:08.715 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:08.715 } 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.715 23:00:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.653 "name": "raid_bdev1", 00:16:09.653 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:09.653 "strip_size_kb": 0, 00:16:09.653 "state": "online", 00:16:09.653 "raid_level": "raid1", 00:16:09.653 "superblock": true, 00:16:09.653 "num_base_bdevs": 2, 00:16:09.653 "num_base_bdevs_discovered": 1, 00:16:09.653 "num_base_bdevs_operational": 1, 00:16:09.653 "base_bdevs_list": [ 00:16:09.653 { 00:16:09.653 "name": null, 00:16:09.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.653 "is_configured": false, 00:16:09.653 "data_offset": 0, 00:16:09.653 "data_size": 7936 00:16:09.653 }, 00:16:09.653 { 00:16:09.653 "name": "BaseBdev2", 00:16:09.653 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:09.653 "is_configured": true, 00:16:09.653 "data_offset": 256, 00:16:09.653 "data_size": 7936 00:16:09.653 } 00:16:09.653 ] 00:16:09.653 }' 
00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.653 23:00:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.234 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.234 "name": "raid_bdev1", 00:16:10.234 "uuid": "20b055a7-2802-44d7-9512-ea11d1f58adf", 00:16:10.234 "strip_size_kb": 0, 00:16:10.234 "state": "online", 00:16:10.234 "raid_level": "raid1", 00:16:10.234 "superblock": true, 00:16:10.234 "num_base_bdevs": 2, 00:16:10.234 "num_base_bdevs_discovered": 1, 00:16:10.234 "num_base_bdevs_operational": 1, 00:16:10.234 "base_bdevs_list": [ 00:16:10.234 { 00:16:10.234 "name": null, 00:16:10.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.234 "is_configured": false, 00:16:10.234 "data_offset": 0, 
00:16:10.234 "data_size": 7936 00:16:10.234 }, 00:16:10.234 { 00:16:10.234 "name": "BaseBdev2", 00:16:10.234 "uuid": "201264a0-9adf-5bcd-8f88-77fa915736d5", 00:16:10.234 "is_configured": true, 00:16:10.234 "data_offset": 256, 00:16:10.234 "data_size": 7936 00:16:10.235 } 00:16:10.235 ] 00:16:10.235 }' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98485 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98485 ']' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98485 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98485 00:16:10.235 killing process with pid 98485 00:16:10.235 Received shutdown signal, test time was about 60.000000 seconds 00:16:10.235 00:16:10.235 Latency(us) 00:16:10.235 [2024-11-26T23:00:49.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.235 [2024-11-26T23:00:49.363Z] =================================================================================================================== 00:16:10.235 [2024-11-26T23:00:49.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:10.235 23:00:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98485' 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98485 00:16:10.235 [2024-11-26 23:00:49.232551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.235 [2024-11-26 23:00:49.232679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.235 [2024-11-26 23:00:49.232721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.235 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98485 00:16:10.235 [2024-11-26 23:00:49.232732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:10.235 [2024-11-26 23:00:49.263818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.499 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:10.499 00:16:10.499 real 0m18.310s 00:16:10.499 user 0m24.194s 00:16:10.499 sys 0m2.744s 00:16:10.499 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.499 23:00:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.499 ************************************ 00:16:10.499 END TEST raid_rebuild_test_sb_4k 00:16:10.499 ************************************ 00:16:10.499 23:00:49 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:10.499 23:00:49 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:10.499 23:00:49 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:10.499 23:00:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.499 23:00:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.499 ************************************ 00:16:10.499 START TEST raid_state_function_test_sb_md_separate 00:16:10.499 ************************************ 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99161 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:10.499 Process raid pid: 99161 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99161' 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99161 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99161 ']' 00:16:10.499 23:00:49 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.499 23:00:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.760 [2024-11-26 23:00:49.660307] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:10.760 [2024-11-26 23:00:49.660483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.760 [2024-11-26 23:00:49.803135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
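The `waitforlisten 99161` call traced above blocks until the freshly started `bdev_svc` app is accepting connections on the default RPC socket `/var/tmp/spdk.sock`. A simplified sketch of that pattern (the retry count and sleep interval are assumptions, not the values used by `autotest_common.sh`):

```python
import os
import socket
import time


def wait_for_listen(sock_path: str, attempts: int = 100, interval: float = 0.1) -> bool:
    """Poll a UNIX domain socket until a server is listening on it.

    Simplified stand-in for the waitforlisten helper seen in the log;
    attempts/interval are illustrative assumptions.
    """
    for _ in range(attempts):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # connect succeeded: the app is up and listening
        except OSError:
            time.sleep(interval)  # not listening yet; retry after a pause
        finally:
            s.close()
    return False
```

Once this returns, the test can issue JSON-RPC calls such as `bdev_raid_create` over the same socket.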
00:16:10.760 [2024-11-26 23:00:49.841726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.760 [2024-11-26 23:00:49.868837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.020 [2024-11-26 23:00:49.913308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.020 [2024-11-26 23:00:49.913340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.589 [2024-11-26 23:00:50.485338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.589 [2024-11-26 23:00:50.485391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.589 [2024-11-26 23:00:50.485403] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.589 [2024-11-26 23:00:50.485411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.589 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.590 "name": "Existed_Raid", 00:16:11.590 "uuid": "6e5564a5-0cc9-4c73-ae13-bc3553a235b8", 00:16:11.590 "strip_size_kb": 0, 00:16:11.590 "state": 
"configuring", 00:16:11.590 "raid_level": "raid1", 00:16:11.590 "superblock": true, 00:16:11.590 "num_base_bdevs": 2, 00:16:11.590 "num_base_bdevs_discovered": 0, 00:16:11.590 "num_base_bdevs_operational": 2, 00:16:11.590 "base_bdevs_list": [ 00:16:11.590 { 00:16:11.590 "name": "BaseBdev1", 00:16:11.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.590 "is_configured": false, 00:16:11.590 "data_offset": 0, 00:16:11.590 "data_size": 0 00:16:11.590 }, 00:16:11.590 { 00:16:11.590 "name": "BaseBdev2", 00:16:11.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.590 "is_configured": false, 00:16:11.590 "data_offset": 0, 00:16:11.590 "data_size": 0 00:16:11.590 } 00:16:11.590 ] 00:16:11.590 }' 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.590 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.850 [2024-11-26 23:00:50.909327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.850 [2024-11-26 23:00:50.909359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.850 [2024-11-26 23:00:50.917366] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.850 [2024-11-26 23:00:50.917400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.850 [2024-11-26 23:00:50.917409] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.850 [2024-11-26 23:00:50.917416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.850 [2024-11-26 23:00:50.934907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.850 BaseBdev1 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:11.850 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 
00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.851 [ 00:16:11.851 { 00:16:11.851 "name": "BaseBdev1", 00:16:11.851 "aliases": [ 00:16:11.851 "a58af5c8-dec6-45c7-bcee-3895c9458fc6" 00:16:11.851 ], 00:16:11.851 "product_name": "Malloc disk", 00:16:11.851 "block_size": 4096, 00:16:11.851 "num_blocks": 8192, 00:16:11.851 "uuid": "a58af5c8-dec6-45c7-bcee-3895c9458fc6", 00:16:11.851 "md_size": 32, 00:16:11.851 "md_interleave": false, 00:16:11.851 "dif_type": 0, 00:16:11.851 "assigned_rate_limits": { 00:16:11.851 "rw_ios_per_sec": 0, 00:16:11.851 "rw_mbytes_per_sec": 0, 00:16:11.851 "r_mbytes_per_sec": 0, 00:16:11.851 "w_mbytes_per_sec": 0 00:16:11.851 }, 00:16:11.851 "claimed": true, 00:16:11.851 "claim_type": "exclusive_write", 00:16:11.851 "zoned": false, 00:16:11.851 "supported_io_types": { 00:16:11.851 "read": true, 00:16:11.851 "write": true, 00:16:11.851 "unmap": true, 
00:16:11.851 "flush": true, 00:16:11.851 "reset": true, 00:16:11.851 "nvme_admin": false, 00:16:11.851 "nvme_io": false, 00:16:11.851 "nvme_io_md": false, 00:16:11.851 "write_zeroes": true, 00:16:11.851 "zcopy": true, 00:16:11.851 "get_zone_info": false, 00:16:11.851 "zone_management": false, 00:16:11.851 "zone_append": false, 00:16:11.851 "compare": false, 00:16:11.851 "compare_and_write": false, 00:16:11.851 "abort": true, 00:16:11.851 "seek_hole": false, 00:16:11.851 "seek_data": false, 00:16:11.851 "copy": true, 00:16:11.851 "nvme_iov_md": false 00:16:11.851 }, 00:16:11.851 "memory_domains": [ 00:16:11.851 { 00:16:11.851 "dma_device_id": "system", 00:16:11.851 "dma_device_type": 1 00:16:11.851 }, 00:16:11.851 { 00:16:11.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.851 "dma_device_type": 2 00:16:11.851 } 00:16:11.851 ], 00:16:11.851 "driver_specific": {} 00:16:11.851 } 00:16:11.851 ] 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.851 23:00:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.851 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.112 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.112 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.112 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.112 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.112 23:00:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.112 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.112 "name": "Existed_Raid", 00:16:12.112 "uuid": "d08ef5f9-bcc9-4379-b062-e73fa0e6adc7", 00:16:12.112 "strip_size_kb": 0, 00:16:12.112 "state": "configuring", 00:16:12.112 "raid_level": "raid1", 00:16:12.112 "superblock": true, 00:16:12.112 "num_base_bdevs": 2, 00:16:12.112 "num_base_bdevs_discovered": 1, 00:16:12.112 "num_base_bdevs_operational": 2, 00:16:12.112 "base_bdevs_list": [ 00:16:12.112 { 00:16:12.112 "name": "BaseBdev1", 00:16:12.112 "uuid": "a58af5c8-dec6-45c7-bcee-3895c9458fc6", 00:16:12.112 "is_configured": true, 00:16:12.112 "data_offset": 256, 00:16:12.112 "data_size": 7936 00:16:12.112 }, 00:16:12.112 { 00:16:12.112 "name": "BaseBdev2", 00:16:12.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.112 "is_configured": 
false, 00:16:12.112 "data_offset": 0, 00:16:12.112 "data_size": 0 00:16:12.112 } 00:16:12.112 ] 00:16:12.112 }' 00:16:12.112 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.112 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.372 [2024-11-26 23:00:51.379077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.372 [2024-11-26 23:00:51.379126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.372 [2024-11-26 23:00:51.391126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.372 [2024-11-26 23:00:51.392825] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.372 [2024-11-26 23:00:51.392864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.372 23:00:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.372 "name": "Existed_Raid", 00:16:12.372 "uuid": "fcb34129-9dce-480c-9bb8-cfd2c631b9bb", 00:16:12.372 "strip_size_kb": 0, 00:16:12.372 "state": "configuring", 00:16:12.372 "raid_level": "raid1", 00:16:12.372 "superblock": true, 00:16:12.372 "num_base_bdevs": 2, 00:16:12.372 "num_base_bdevs_discovered": 1, 00:16:12.372 "num_base_bdevs_operational": 2, 00:16:12.372 "base_bdevs_list": [ 00:16:12.372 { 00:16:12.372 "name": "BaseBdev1", 00:16:12.372 "uuid": "a58af5c8-dec6-45c7-bcee-3895c9458fc6", 00:16:12.372 "is_configured": true, 00:16:12.372 "data_offset": 256, 00:16:12.372 "data_size": 7936 00:16:12.372 }, 00:16:12.372 { 00:16:12.372 "name": "BaseBdev2", 00:16:12.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.372 "is_configured": false, 00:16:12.372 "data_offset": 0, 00:16:12.372 "data_size": 0 00:16:12.372 } 00:16:12.372 ] 00:16:12.372 }' 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.372 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.949 [2024-11-26 23:00:51.870876] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.949 [2024-11-26 23:00:51.871041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:12.949 [2024-11-26 23:00:51.871063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:12.949 [2024-11-26 23:00:51.871158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.949 [2024-11-26 23:00:51.871286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:12.949 [2024-11-26 23:00:51.871303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:12.949 [2024-11-26 23:00:51.871373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.949 BaseBdev2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.949 [ 00:16:12.949 { 00:16:12.949 "name": "BaseBdev2", 00:16:12.949 "aliases": [ 00:16:12.949 "ffdbeac7-547d-4ba0-a178-40653daf39fc" 00:16:12.949 ], 00:16:12.949 "product_name": "Malloc disk", 00:16:12.949 "block_size": 4096, 00:16:12.949 "num_blocks": 8192, 00:16:12.949 "uuid": "ffdbeac7-547d-4ba0-a178-40653daf39fc", 00:16:12.949 "md_size": 32, 00:16:12.949 "md_interleave": false, 00:16:12.949 "dif_type": 0, 00:16:12.949 "assigned_rate_limits": { 00:16:12.949 "rw_ios_per_sec": 0, 00:16:12.949 "rw_mbytes_per_sec": 0, 00:16:12.949 "r_mbytes_per_sec": 0, 00:16:12.949 "w_mbytes_per_sec": 0 00:16:12.949 }, 00:16:12.949 "claimed": true, 00:16:12.949 "claim_type": "exclusive_write", 00:16:12.949 "zoned": false, 00:16:12.949 "supported_io_types": { 00:16:12.949 "read": true, 00:16:12.949 "write": true, 00:16:12.949 "unmap": true, 00:16:12.949 "flush": true, 00:16:12.949 "reset": true, 00:16:12.949 "nvme_admin": false, 00:16:12.949 "nvme_io": false, 00:16:12.949 "nvme_io_md": false, 00:16:12.949 "write_zeroes": true, 00:16:12.949 "zcopy": true, 00:16:12.949 "get_zone_info": false, 00:16:12.949 "zone_management": false, 00:16:12.949 "zone_append": false, 00:16:12.949 "compare": false, 00:16:12.949 "compare_and_write": false, 00:16:12.949 "abort": true, 00:16:12.949 "seek_hole": false, 
00:16:12.949 "seek_data": false, 00:16:12.949 "copy": true, 00:16:12.949 "nvme_iov_md": false 00:16:12.949 }, 00:16:12.949 "memory_domains": [ 00:16:12.949 { 00:16:12.949 "dma_device_id": "system", 00:16:12.949 "dma_device_type": 1 00:16:12.949 }, 00:16:12.949 { 00:16:12.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.949 "dma_device_type": 2 00:16:12.949 } 00:16:12.949 ], 00:16:12.949 "driver_specific": {} 00:16:12.949 } 00:16:12.949 ] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.949 
23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.949 "name": "Existed_Raid", 00:16:12.949 "uuid": "fcb34129-9dce-480c-9bb8-cfd2c631b9bb", 00:16:12.949 "strip_size_kb": 0, 00:16:12.949 "state": "online", 00:16:12.949 "raid_level": "raid1", 00:16:12.949 "superblock": true, 00:16:12.949 "num_base_bdevs": 2, 00:16:12.949 "num_base_bdevs_discovered": 2, 00:16:12.949 "num_base_bdevs_operational": 2, 00:16:12.949 "base_bdevs_list": [ 00:16:12.949 { 00:16:12.949 "name": "BaseBdev1", 00:16:12.949 "uuid": "a58af5c8-dec6-45c7-bcee-3895c9458fc6", 00:16:12.949 "is_configured": true, 00:16:12.949 "data_offset": 256, 00:16:12.949 "data_size": 7936 00:16:12.949 }, 00:16:12.949 { 00:16:12.949 "name": "BaseBdev2", 00:16:12.949 "uuid": "ffdbeac7-547d-4ba0-a178-40653daf39fc", 00:16:12.949 "is_configured": true, 00:16:12.949 "data_offset": 256, 00:16:12.949 "data_size": 7936 00:16:12.949 } 00:16:12.949 ] 00:16:12.949 }' 00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:12.949 23:00:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.209 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.469 [2024-11-26 23:00:52.347337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.469 "name": "Existed_Raid", 00:16:13.469 "aliases": [ 00:16:13.469 "fcb34129-9dce-480c-9bb8-cfd2c631b9bb" 00:16:13.469 ], 00:16:13.469 "product_name": "Raid Volume", 00:16:13.469 "block_size": 4096, 00:16:13.469 "num_blocks": 7936, 
00:16:13.469 "uuid": "fcb34129-9dce-480c-9bb8-cfd2c631b9bb", 00:16:13.469 "md_size": 32, 00:16:13.469 "md_interleave": false, 00:16:13.469 "dif_type": 0, 00:16:13.469 "assigned_rate_limits": { 00:16:13.469 "rw_ios_per_sec": 0, 00:16:13.469 "rw_mbytes_per_sec": 0, 00:16:13.469 "r_mbytes_per_sec": 0, 00:16:13.469 "w_mbytes_per_sec": 0 00:16:13.469 }, 00:16:13.469 "claimed": false, 00:16:13.469 "zoned": false, 00:16:13.469 "supported_io_types": { 00:16:13.469 "read": true, 00:16:13.469 "write": true, 00:16:13.469 "unmap": false, 00:16:13.469 "flush": false, 00:16:13.469 "reset": true, 00:16:13.469 "nvme_admin": false, 00:16:13.469 "nvme_io": false, 00:16:13.469 "nvme_io_md": false, 00:16:13.469 "write_zeroes": true, 00:16:13.469 "zcopy": false, 00:16:13.469 "get_zone_info": false, 00:16:13.469 "zone_management": false, 00:16:13.469 "zone_append": false, 00:16:13.469 "compare": false, 00:16:13.469 "compare_and_write": false, 00:16:13.469 "abort": false, 00:16:13.469 "seek_hole": false, 00:16:13.469 "seek_data": false, 00:16:13.469 "copy": false, 00:16:13.469 "nvme_iov_md": false 00:16:13.469 }, 00:16:13.469 "memory_domains": [ 00:16:13.469 { 00:16:13.469 "dma_device_id": "system", 00:16:13.469 "dma_device_type": 1 00:16:13.469 }, 00:16:13.469 { 00:16:13.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.469 "dma_device_type": 2 00:16:13.469 }, 00:16:13.469 { 00:16:13.469 "dma_device_id": "system", 00:16:13.469 "dma_device_type": 1 00:16:13.469 }, 00:16:13.469 { 00:16:13.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.469 "dma_device_type": 2 00:16:13.469 } 00:16:13.469 ], 00:16:13.469 "driver_specific": { 00:16:13.469 "raid": { 00:16:13.469 "uuid": "fcb34129-9dce-480c-9bb8-cfd2c631b9bb", 00:16:13.469 "strip_size_kb": 0, 00:16:13.469 "state": "online", 00:16:13.469 "raid_level": "raid1", 00:16:13.469 "superblock": true, 00:16:13.469 "num_base_bdevs": 2, 00:16:13.469 "num_base_bdevs_discovered": 2, 00:16:13.469 "num_base_bdevs_operational": 2, 00:16:13.469 
"base_bdevs_list": [ 00:16:13.469 { 00:16:13.469 "name": "BaseBdev1", 00:16:13.469 "uuid": "a58af5c8-dec6-45c7-bcee-3895c9458fc6", 00:16:13.469 "is_configured": true, 00:16:13.469 "data_offset": 256, 00:16:13.469 "data_size": 7936 00:16:13.469 }, 00:16:13.469 { 00:16:13.469 "name": "BaseBdev2", 00:16:13.469 "uuid": "ffdbeac7-547d-4ba0-a178-40653daf39fc", 00:16:13.469 "is_configured": true, 00:16:13.469 "data_offset": 256, 00:16:13.469 "data_size": 7936 00:16:13.469 } 00:16:13.469 ] 00:16:13.469 } 00:16:13.469 } 00:16:13.469 }' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:13.469 BaseBdev2' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:13.469 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:13.470 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:13.470 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.470 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.470 [2024-11-26 23:00:52.591183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.730 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.730 "name": "Existed_Raid", 00:16:13.730 "uuid": "fcb34129-9dce-480c-9bb8-cfd2c631b9bb", 00:16:13.730 "strip_size_kb": 0, 00:16:13.730 "state": "online", 00:16:13.730 "raid_level": "raid1", 00:16:13.730 "superblock": true, 00:16:13.730 "num_base_bdevs": 2, 00:16:13.730 "num_base_bdevs_discovered": 1, 00:16:13.730 "num_base_bdevs_operational": 1, 00:16:13.730 "base_bdevs_list": [ 00:16:13.730 { 00:16:13.730 "name": null, 00:16:13.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.730 "is_configured": false, 00:16:13.730 "data_offset": 0, 00:16:13.730 "data_size": 7936 00:16:13.730 }, 00:16:13.730 { 00:16:13.730 "name": "BaseBdev2", 00:16:13.730 "uuid": "ffdbeac7-547d-4ba0-a178-40653daf39fc", 00:16:13.730 "is_configured": true, 00:16:13.730 "data_offset": 256, 00:16:13.730 "data_size": 7936 00:16:13.730 } 00:16:13.730 ] 00:16:13.730 }' 00:16:13.731 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.731 23:00:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.991 [2024-11-26 23:00:53.075575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.991 [2024-11-26 23:00:53.075672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.991 [2024-11-26 23:00:53.088154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.991 [2024-11-26 23:00:53.088203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.991 [2024-11-26 23:00:53.088212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:13.991 23:00:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.991 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99161 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99161 ']' 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99161 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.252 23:00:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99161 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.252 killing process with pid 99161 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99161' 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99161 00:16:14.252 [2024-11-26 23:00:53.188209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.252 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99161 00:16:14.252 [2024-11-26 23:00:53.189150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.512 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.512 00:16:14.512 real 0m3.862s 00:16:14.512 user 0m6.018s 00:16:14.512 sys 0m0.907s 00:16:14.512 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.512 23:00:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 ************************************ 00:16:14.512 END TEST raid_state_function_test_sb_md_separate 00:16:14.512 ************************************ 00:16:14.512 23:00:53 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:14.512 23:00:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:14.512 23:00:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.512 23:00:53 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.512 ************************************ 00:16:14.512 START TEST raid_superblock_test_md_separate 00:16:14.512 ************************************ 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:14.512 23:00:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99402 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99402 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99402 ']' 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.512 23:00:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 [2024-11-26 23:00:53.588440] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:14.512 [2024-11-26 23:00:53.588559] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99402 ] 00:16:14.772 [2024-11-26 23:00:53.727008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:14.772 [2024-11-26 23:00:53.766310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.772 [2024-11-26 23:00:53.791925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.772 [2024-11-26 23:00:53.835080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.772 [2024-11-26 23:00:53.835117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.343 23:00:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.343 malloc1 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.343 [2024-11-26 23:00:54.428889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.343 [2024-11-26 23:00:54.428949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.343 [2024-11-26 23:00:54.428986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.343 [2024-11-26 23:00:54.428995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.343 [2024-11-26 23:00:54.430890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.343 [2024-11-26 23:00:54.430931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.343 pt1 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:15.343 23:00:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.343 malloc2 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.343 [2024-11-26 23:00:54.458204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.343 [2024-11-26 23:00:54.458294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.343 [2024-11-26 23:00:54.458314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.343 [2024-11-26 23:00:54.458321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.343 [2024-11-26 23:00:54.460136] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.343 [2024-11-26 23:00:54.460172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.343 pt2 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.343 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.603 [2024-11-26 23:00:54.470235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:15.603 [2024-11-26 23:00:54.472056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.603 [2024-11-26 23:00:54.472213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:15.603 [2024-11-26 23:00:54.472228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:15.603 [2024-11-26 23:00:54.472334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:15.603 [2024-11-26 23:00:54.472447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:15.603 [2024-11-26 23:00:54.472465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:15.603 [2024-11-26 23:00:54.472543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.603 23:00:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.603 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.604 "name": "raid_bdev1", 00:16:15.604 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:15.604 "strip_size_kb": 0, 00:16:15.604 "state": "online", 00:16:15.604 "raid_level": "raid1", 00:16:15.604 "superblock": true, 00:16:15.604 "num_base_bdevs": 2, 00:16:15.604 "num_base_bdevs_discovered": 2, 00:16:15.604 "num_base_bdevs_operational": 2, 00:16:15.604 "base_bdevs_list": [ 00:16:15.604 { 00:16:15.604 "name": "pt1", 00:16:15.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.604 "is_configured": true, 00:16:15.604 "data_offset": 256, 00:16:15.604 "data_size": 7936 00:16:15.604 }, 00:16:15.604 { 00:16:15.604 "name": "pt2", 00:16:15.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.604 "is_configured": true, 00:16:15.604 "data_offset": 256, 00:16:15.604 "data_size": 7936 00:16:15.604 } 00:16:15.604 ] 00:16:15.604 }' 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.604 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:15.863 23:00:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.863 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.864 [2024-11-26 23:00:54.926613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.864 23:00:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.864 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:15.864 "name": "raid_bdev1", 00:16:15.864 "aliases": [ 00:16:15.864 "83d48595-5e51-4eb8-94c9-dcc65133b86f" 00:16:15.864 ], 00:16:15.864 "product_name": "Raid Volume", 00:16:15.864 "block_size": 4096, 00:16:15.864 "num_blocks": 7936, 00:16:15.864 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:15.864 "md_size": 32, 00:16:15.864 "md_interleave": false, 00:16:15.864 "dif_type": 0, 00:16:15.864 "assigned_rate_limits": { 00:16:15.864 "rw_ios_per_sec": 0, 00:16:15.864 "rw_mbytes_per_sec": 0, 00:16:15.864 "r_mbytes_per_sec": 0, 00:16:15.864 "w_mbytes_per_sec": 0 00:16:15.864 }, 00:16:15.864 "claimed": false, 00:16:15.864 "zoned": false, 00:16:15.864 "supported_io_types": { 00:16:15.864 "read": true, 00:16:15.864 "write": true, 00:16:15.864 "unmap": false, 00:16:15.864 "flush": false, 00:16:15.864 "reset": true, 00:16:15.864 "nvme_admin": false, 00:16:15.864 "nvme_io": false, 00:16:15.864 "nvme_io_md": false, 00:16:15.864 "write_zeroes": true, 00:16:15.864 "zcopy": false, 00:16:15.864 "get_zone_info": false, 00:16:15.864 "zone_management": false, 00:16:15.864 "zone_append": false, 00:16:15.864 "compare": false, 00:16:15.864 "compare_and_write": false, 00:16:15.864 "abort": false, 00:16:15.864 "seek_hole": false, 00:16:15.864 "seek_data": false, 00:16:15.864 "copy": false, 00:16:15.864 
"nvme_iov_md": false 00:16:15.864 }, 00:16:15.864 "memory_domains": [ 00:16:15.864 { 00:16:15.864 "dma_device_id": "system", 00:16:15.864 "dma_device_type": 1 00:16:15.864 }, 00:16:15.864 { 00:16:15.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.864 "dma_device_type": 2 00:16:15.864 }, 00:16:15.864 { 00:16:15.864 "dma_device_id": "system", 00:16:15.864 "dma_device_type": 1 00:16:15.864 }, 00:16:15.864 { 00:16:15.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.864 "dma_device_type": 2 00:16:15.864 } 00:16:15.864 ], 00:16:15.864 "driver_specific": { 00:16:15.864 "raid": { 00:16:15.864 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:15.864 "strip_size_kb": 0, 00:16:15.864 "state": "online", 00:16:15.864 "raid_level": "raid1", 00:16:15.864 "superblock": true, 00:16:15.864 "num_base_bdevs": 2, 00:16:15.864 "num_base_bdevs_discovered": 2, 00:16:15.864 "num_base_bdevs_operational": 2, 00:16:15.864 "base_bdevs_list": [ 00:16:15.864 { 00:16:15.864 "name": "pt1", 00:16:15.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.864 "is_configured": true, 00:16:15.864 "data_offset": 256, 00:16:15.864 "data_size": 7936 00:16:15.864 }, 00:16:15.864 { 00:16:15.864 "name": "pt2", 00:16:15.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.864 "is_configured": true, 00:16:15.864 "data_offset": 256, 00:16:15.864 "data_size": 7936 00:16:15.864 } 00:16:15.864 ] 00:16:15.864 } 00:16:15.864 } 00:16:15.864 }' 00:16:15.864 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.124 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.124 pt2' 00:16:16.124 23:00:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.124 23:00:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 
00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:16.125 [2024-11-26 23:00:55.134603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=83d48595-5e51-4eb8-94c9-dcc65133b86f 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 83d48595-5e51-4eb8-94c9-dcc65133b86f ']' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 [2024-11-26 23:00:55.178395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.125 [2024-11-26 23:00:55.178418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.125 [2024-11-26 23:00:55.178497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.125 [2024-11-26 23:00:55.178555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:16:16.125 [2024-11-26 23:00:55.178569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:16.125 23:00:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.125 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.387 [2024-11-26 23:00:55.314452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:16.387 [2024-11-26 23:00:55.316239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:16.387 [2024-11-26 23:00:55.316306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:16.387 [2024-11-26 23:00:55.316347] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:16.387 [2024-11-26 23:00:55.316361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.387 [2024-11-26 23:00:55.316370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:16.387 request: 00:16:16.387 { 00:16:16.387 "name": "raid_bdev1", 00:16:16.387 "raid_level": "raid1", 00:16:16.387 "base_bdevs": [ 00:16:16.387 "malloc1", 00:16:16.387 "malloc2" 00:16:16.387 ], 00:16:16.387 "superblock": false, 00:16:16.387 "method": "bdev_raid_create", 00:16:16.387 "req_id": 1 00:16:16.387 } 00:16:16.387 Got JSON-RPC error response 00:16:16.387 response: 00:16:16.387 { 00:16:16.387 "code": -17, 00:16:16.387 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:16.387 } 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:16.387 23:00:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.387 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.387 [2024-11-26 23:00:55.378437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.387 [2024-11-26 23:00:55.378536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.388 [2024-11-26 23:00:55.378556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:16:16.388 [2024-11-26 23:00:55.378568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.388 [2024-11-26 23:00:55.380429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.388 [2024-11-26 23:00:55.380466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.388 [2024-11-26 23:00:55.380504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:16.388 [2024-11-26 23:00:55.380543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.388 pt1 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.388 
23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.388 "name": "raid_bdev1", 00:16:16.388 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:16.388 "strip_size_kb": 0, 00:16:16.388 "state": "configuring", 00:16:16.388 "raid_level": "raid1", 00:16:16.388 "superblock": true, 00:16:16.388 "num_base_bdevs": 2, 00:16:16.388 "num_base_bdevs_discovered": 1, 00:16:16.388 "num_base_bdevs_operational": 2, 00:16:16.388 "base_bdevs_list": [ 00:16:16.388 { 00:16:16.388 "name": "pt1", 00:16:16.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.388 "is_configured": true, 00:16:16.388 "data_offset": 256, 00:16:16.388 "data_size": 7936 00:16:16.388 }, 00:16:16.388 { 00:16:16.388 "name": null, 00:16:16.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.388 "is_configured": false, 00:16:16.388 "data_offset": 256, 00:16:16.388 "data_size": 7936 00:16:16.388 } 00:16:16.388 ] 00:16:16.388 }' 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.388 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # 
(( i = 1 )) 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.962 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.963 [2024-11-26 23:00:55.830558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.963 [2024-11-26 23:00:55.830656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.963 [2024-11-26 23:00:55.830699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:16.963 [2024-11-26 23:00:55.830744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.963 [2024-11-26 23:00:55.830884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.963 [2024-11-26 23:00:55.830965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.963 [2024-11-26 23:00:55.831030] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:16.963 [2024-11-26 23:00:55.831079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.963 [2024-11-26 23:00:55.831169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:16.963 [2024-11-26 23:00:55.831207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:16.963 [2024-11-26 23:00:55.831301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:16.963 [2024-11-26 23:00:55.831421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:16:16.963 [2024-11-26 23:00:55.831456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:16.963 [2024-11-26 23:00:55.831555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.963 pt2 00:16:16.963 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.963 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:16.963 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.963 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.963 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.964 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.964 "name": "raid_bdev1", 00:16:16.964 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:16.964 "strip_size_kb": 0, 00:16:16.964 "state": "online", 00:16:16.964 "raid_level": "raid1", 00:16:16.964 "superblock": true, 00:16:16.964 "num_base_bdevs": 2, 00:16:16.964 "num_base_bdevs_discovered": 2, 00:16:16.964 "num_base_bdevs_operational": 2, 00:16:16.964 "base_bdevs_list": [ 00:16:16.964 { 00:16:16.964 "name": "pt1", 00:16:16.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.965 "is_configured": true, 00:16:16.965 "data_offset": 256, 00:16:16.965 "data_size": 7936 00:16:16.965 }, 00:16:16.965 { 00:16:16.965 "name": "pt2", 00:16:16.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.965 "is_configured": true, 00:16:16.965 "data_offset": 256, 00:16:16.965 "data_size": 7936 00:16:16.965 } 00:16:16.965 ] 00:16:16.965 }' 00:16:16.965 23:00:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.965 23:00:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.231 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.232 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.232 [2024-11-26 23:00:56.286937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.232 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.232 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.232 "name": "raid_bdev1", 00:16:17.232 "aliases": [ 00:16:17.232 "83d48595-5e51-4eb8-94c9-dcc65133b86f" 00:16:17.232 ], 00:16:17.232 "product_name": "Raid Volume", 00:16:17.232 "block_size": 4096, 00:16:17.232 "num_blocks": 7936, 00:16:17.232 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:17.232 "md_size": 32, 00:16:17.232 "md_interleave": false, 00:16:17.232 "dif_type": 0, 00:16:17.232 "assigned_rate_limits": { 00:16:17.232 "rw_ios_per_sec": 0, 00:16:17.232 "rw_mbytes_per_sec": 0, 00:16:17.232 "r_mbytes_per_sec": 0, 00:16:17.232 "w_mbytes_per_sec": 0 00:16:17.232 }, 00:16:17.232 "claimed": false, 00:16:17.232 "zoned": false, 00:16:17.232 "supported_io_types": { 00:16:17.232 "read": true, 00:16:17.232 "write": true, 00:16:17.232 "unmap": false, 00:16:17.232 
"flush": false, 00:16:17.232 "reset": true, 00:16:17.232 "nvme_admin": false, 00:16:17.232 "nvme_io": false, 00:16:17.232 "nvme_io_md": false, 00:16:17.232 "write_zeroes": true, 00:16:17.232 "zcopy": false, 00:16:17.232 "get_zone_info": false, 00:16:17.232 "zone_management": false, 00:16:17.232 "zone_append": false, 00:16:17.232 "compare": false, 00:16:17.232 "compare_and_write": false, 00:16:17.232 "abort": false, 00:16:17.232 "seek_hole": false, 00:16:17.232 "seek_data": false, 00:16:17.232 "copy": false, 00:16:17.232 "nvme_iov_md": false 00:16:17.232 }, 00:16:17.232 "memory_domains": [ 00:16:17.232 { 00:16:17.232 "dma_device_id": "system", 00:16:17.232 "dma_device_type": 1 00:16:17.232 }, 00:16:17.232 { 00:16:17.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.232 "dma_device_type": 2 00:16:17.232 }, 00:16:17.232 { 00:16:17.232 "dma_device_id": "system", 00:16:17.232 "dma_device_type": 1 00:16:17.232 }, 00:16:17.232 { 00:16:17.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.232 "dma_device_type": 2 00:16:17.232 } 00:16:17.232 ], 00:16:17.232 "driver_specific": { 00:16:17.232 "raid": { 00:16:17.232 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:17.232 "strip_size_kb": 0, 00:16:17.232 "state": "online", 00:16:17.232 "raid_level": "raid1", 00:16:17.232 "superblock": true, 00:16:17.232 "num_base_bdevs": 2, 00:16:17.232 "num_base_bdevs_discovered": 2, 00:16:17.232 "num_base_bdevs_operational": 2, 00:16:17.232 "base_bdevs_list": [ 00:16:17.232 { 00:16:17.232 "name": "pt1", 00:16:17.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.232 "is_configured": true, 00:16:17.232 "data_offset": 256, 00:16:17.232 "data_size": 7936 00:16:17.232 }, 00:16:17.232 { 00:16:17.232 "name": "pt2", 00:16:17.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.232 "is_configured": true, 00:16:17.232 "data_offset": 256, 00:16:17.232 "data_size": 7936 00:16:17.232 } 00:16:17.232 ] 00:16:17.232 } 00:16:17.232 } 00:16:17.232 }' 00:16:17.232 23:00:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:17.492 pt2' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.492 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.492 [2024-11-26 23:00:56.527005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 83d48595-5e51-4eb8-94c9-dcc65133b86f '!=' 83d48595-5e51-4eb8-94c9-dcc65133b86f ']' 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 [2024-11-26 23:00:56.550771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.493 23:00:56 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.493 "name": "raid_bdev1", 00:16:17.493 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:17.493 "strip_size_kb": 0, 00:16:17.493 "state": "online", 00:16:17.493 "raid_level": "raid1", 00:16:17.493 "superblock": true, 00:16:17.493 "num_base_bdevs": 2, 00:16:17.493 "num_base_bdevs_discovered": 1, 00:16:17.493 "num_base_bdevs_operational": 1, 00:16:17.493 "base_bdevs_list": [ 00:16:17.493 { 00:16:17.493 "name": null, 00:16:17.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.493 "is_configured": false, 00:16:17.493 "data_offset": 0, 00:16:17.493 "data_size": 7936 00:16:17.493 }, 00:16:17.493 { 00:16:17.493 "name": "pt2", 00:16:17.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.493 "is_configured": true, 00:16:17.493 "data_offset": 256, 00:16:17.493 "data_size": 7936 00:16:17.493 } 00:16:17.493 ] 00:16:17.493 }' 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.493 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 23:00:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.063 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.063 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 [2024-11-26 23:00:56.994904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.063 
[2024-11-26 23:00:56.994934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.063 [2024-11-26 23:00:56.994999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.063 [2024-11-26 23:00:56.995034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.063 [2024-11-26 23:00:56.995044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:18.063 23:00:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 [2024-11-26 23:00:57.050926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.063 [2024-11-26 23:00:57.050969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.063 [2024-11-26 23:00:57.050982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:18.063 [2024-11-26 23:00:57.050991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.063 [2024-11-26 23:00:57.052823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.063 [2024-11-26 23:00:57.052858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.063 [2024-11-26 23:00:57.052897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.063 [2024-11-26 
23:00:57.052926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.063 [2024-11-26 23:00:57.052982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:18.063 [2024-11-26 23:00:57.052991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:18.063 [2024-11-26 23:00:57.053057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:18.063 [2024-11-26 23:00:57.053137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.063 [2024-11-26 23:00:57.053148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:18.063 [2024-11-26 23:00:57.053214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.063 pt2 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.063 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.064 "name": "raid_bdev1", 00:16:18.064 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:18.064 "strip_size_kb": 0, 00:16:18.064 "state": "online", 00:16:18.064 "raid_level": "raid1", 00:16:18.064 "superblock": true, 00:16:18.064 "num_base_bdevs": 2, 00:16:18.064 "num_base_bdevs_discovered": 1, 00:16:18.064 "num_base_bdevs_operational": 1, 00:16:18.064 "base_bdevs_list": [ 00:16:18.064 { 00:16:18.064 "name": null, 00:16:18.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.064 "is_configured": false, 00:16:18.064 "data_offset": 256, 00:16:18.064 "data_size": 7936 00:16:18.064 }, 00:16:18.064 { 00:16:18.064 "name": "pt2", 00:16:18.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.064 "is_configured": true, 00:16:18.064 "data_offset": 256, 00:16:18.064 "data_size": 7936 00:16:18.064 } 00:16:18.064 ] 00:16:18.064 }' 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.064 23:00:57 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.635 [2024-11-26 23:00:57.471038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.635 [2024-11-26 23:00:57.471066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.635 [2024-11-26 23:00:57.471108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.635 [2024-11-26 23:00:57.471144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.635 [2024-11-26 23:00:57.471152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:18.635 
23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.635 [2024-11-26 23:00:57.535074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.635 [2024-11-26 23:00:57.535134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.635 [2024-11-26 23:00:57.535153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:18.635 [2024-11-26 23:00:57.535160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.635 [2024-11-26 23:00:57.536995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.635 [2024-11-26 23:00:57.537026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.635 [2024-11-26 23:00:57.537067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:18.635 [2024-11-26 23:00:57.537089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.635 [2024-11-26 23:00:57.537176] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:18.635 [2024-11-26 23:00:57.537192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.635 [2024-11-26 23:00:57.537206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:18.635 [2024-11-26 23:00:57.537240] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.635 [2024-11-26 23:00:57.537311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:18.635 [2024-11-26 23:00:57.537319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:18.635 [2024-11-26 23:00:57.537370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:18.635 [2024-11-26 23:00:57.537441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:18.635 [2024-11-26 23:00:57.537450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:18.635 [2024-11-26 23:00:57.537514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.635 pt1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.635 23:00:57 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.635 "name": "raid_bdev1", 00:16:18.635 "uuid": "83d48595-5e51-4eb8-94c9-dcc65133b86f", 00:16:18.635 "strip_size_kb": 0, 00:16:18.635 "state": "online", 00:16:18.635 "raid_level": "raid1", 00:16:18.635 "superblock": true, 00:16:18.635 "num_base_bdevs": 2, 00:16:18.635 "num_base_bdevs_discovered": 1, 00:16:18.635 "num_base_bdevs_operational": 1, 00:16:18.635 "base_bdevs_list": [ 00:16:18.635 { 00:16:18.635 "name": null, 00:16:18.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.635 "is_configured": false, 00:16:18.635 "data_offset": 256, 00:16:18.635 "data_size": 7936 00:16:18.635 }, 00:16:18.635 { 00:16:18.635 "name": "pt2", 00:16:18.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.635 "is_configured": true, 00:16:18.635 "data_offset": 256, 00:16:18.635 "data_size": 7936 00:16:18.635 } 00:16:18.635 ] 00:16:18.635 }' 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:18.635 23:00:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.205 [2024-11-26 23:00:58.091457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 83d48595-5e51-4eb8-94c9-dcc65133b86f '!=' 83d48595-5e51-4eb8-94c9-dcc65133b86f ']' 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99402 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99402 ']' 00:16:19.205 
23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99402 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99402 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.205 killing process with pid 99402 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99402' 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 99402 00:16:19.205 [2024-11-26 23:00:58.169821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.205 [2024-11-26 23:00:58.169889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.205 [2024-11-26 23:00:58.169924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.205 [2024-11-26 23:00:58.169934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:19.205 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99402 00:16:19.205 [2024-11-26 23:00:58.193881] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.465 23:00:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:19.466 00:16:19.466 real 0m4.925s 00:16:19.466 user 0m8.048s 00:16:19.466 sys 0m1.081s 00:16:19.466 23:00:58 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.466 23:00:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.466 ************************************ 00:16:19.466 END TEST raid_superblock_test_md_separate 00:16:19.466 ************************************ 00:16:19.466 23:00:58 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:19.466 23:00:58 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:19.466 23:00:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:19.466 23:00:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.466 23:00:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.466 ************************************ 00:16:19.466 START TEST raid_rebuild_test_sb_md_separate 00:16:19.466 ************************************ 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.466 23:00:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@597 -- # raid_pid=99716 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99716 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99716 ']' 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.466 23:00:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.726 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:19.726 Zero copy mechanism will not be used. 00:16:19.726 [2024-11-26 23:00:58.607428] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:19.726 [2024-11-26 23:00:58.607554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99716 ] 00:16:19.726 [2024-11-26 23:00:58.747458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:19.726 [2024-11-26 23:00:58.787583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.726 [2024-11-26 23:00:58.814340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.986 [2024-11-26 23:00:58.858817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.986 [2024-11-26 23:00:58.858856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 BaseBdev1_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 [2024-11-26 23:00:59.428653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:20.601 [2024-11-26 23:00:59.428714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.601 
[2024-11-26 23:00:59.428740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.601 [2024-11-26 23:00:59.428754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.601 [2024-11-26 23:00:59.430580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.601 [2024-11-26 23:00:59.430615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.601 BaseBdev1 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 BaseBdev2_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 [2024-11-26 23:00:59.457983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:20.601 [2024-11-26 23:00:59.458045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.601 [2024-11-26 23:00:59.458064] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.601 [2024-11-26 23:00:59.458074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.601 [2024-11-26 23:00:59.459896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.601 [2024-11-26 23:00:59.459945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:20.601 BaseBdev2 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 spare_malloc 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 spare_delay 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:20.601 [2024-11-26 23:00:59.517740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.601 [2024-11-26 23:00:59.517825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.601 [2024-11-26 23:00:59.517863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:20.601 [2024-11-26 23:00:59.517881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.601 [2024-11-26 23:00:59.521004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.601 [2024-11-26 23:00:59.521055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.601 spare 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 [2024-11-26 23:00:59.529980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.601 [2024-11-26 23:00:59.532125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.601 [2024-11-26 23:00:59.532314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:20.601 [2024-11-26 23:00:59.532331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:20.601 [2024-11-26 23:00:59.532402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:20.601 [2024-11-26 23:00:59.532503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007400 00:16:20.601 [2024-11-26 23:00:59.532523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:20.601 [2024-11-26 23:00:59.532601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.601 "name": "raid_bdev1", 00:16:20.601 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:20.601 "strip_size_kb": 0, 00:16:20.601 "state": "online", 00:16:20.601 "raid_level": "raid1", 00:16:20.601 "superblock": true, 00:16:20.601 "num_base_bdevs": 2, 00:16:20.601 "num_base_bdevs_discovered": 2, 00:16:20.601 "num_base_bdevs_operational": 2, 00:16:20.601 "base_bdevs_list": [ 00:16:20.601 { 00:16:20.601 "name": "BaseBdev1", 00:16:20.601 "uuid": "4fa22fa5-f7d2-5085-8d72-57d7a750e7db", 00:16:20.601 "is_configured": true, 00:16:20.601 "data_offset": 256, 00:16:20.601 "data_size": 7936 00:16:20.601 }, 00:16:20.601 { 00:16:20.601 "name": "BaseBdev2", 00:16:20.601 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:20.601 "is_configured": true, 00:16:20.601 "data_offset": 256, 00:16:20.601 "data_size": 7936 00:16:20.601 } 00:16:20.601 ] 00:16:20.601 }' 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.601 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.861 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.861 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:20.861 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.861 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.861 [2024-11-26 23:00:59.974298] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.121 23:00:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.121 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.122 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:21.382 [2024-11-26 23:01:00.250122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:21.382 /dev/nbd0 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.382 23:01:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.382 1+0 records in 00:16:21.382 1+0 records out 00:16:21.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429929 s, 9.5 MB/s 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:21.382 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:21.951 7936+0 records in 00:16:21.951 7936+0 records out 00:16:21.951 32505856 bytes (33 MB, 31 MiB) copied, 0.637998 s, 50.9 MB/s 00:16:21.951 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:21.951 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.951 
23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.951 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.952 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:21.952 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.952 23:01:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:22.211 [2024-11-26 23:01:01.178209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:22.211 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:22.212 [2024-11-26 23:01:01.206363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.212 23:01:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.212 "name": "raid_bdev1", 00:16:22.212 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:22.212 "strip_size_kb": 0, 00:16:22.212 "state": "online", 00:16:22.212 "raid_level": "raid1", 00:16:22.212 "superblock": true, 00:16:22.212 "num_base_bdevs": 2, 00:16:22.212 "num_base_bdevs_discovered": 1, 00:16:22.212 "num_base_bdevs_operational": 1, 00:16:22.212 "base_bdevs_list": [ 00:16:22.212 { 00:16:22.212 "name": null, 00:16:22.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.212 "is_configured": false, 00:16:22.212 "data_offset": 0, 00:16:22.212 "data_size": 7936 00:16:22.212 }, 00:16:22.212 { 00:16:22.212 "name": "BaseBdev2", 00:16:22.212 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:22.212 "is_configured": true, 00:16:22.212 "data_offset": 256, 00:16:22.212 "data_size": 7936 00:16:22.212 } 00:16:22.212 ] 00:16:22.212 }' 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.212 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.781 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.781 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.781 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.782 [2024-11-26 23:01:01.638500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.782 [2024-11-26 23:01:01.641095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:22.782 [2024-11-26 23:01:01.642929] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.782 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.782 23:01:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.721 "name": "raid_bdev1", 00:16:23.721 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:23.721 "strip_size_kb": 0, 00:16:23.721 "state": "online", 00:16:23.721 "raid_level": "raid1", 00:16:23.721 "superblock": true, 00:16:23.721 "num_base_bdevs": 2, 00:16:23.721 "num_base_bdevs_discovered": 2, 00:16:23.721 "num_base_bdevs_operational": 2, 00:16:23.721 "process": { 
00:16:23.721 "type": "rebuild", 00:16:23.721 "target": "spare", 00:16:23.721 "progress": { 00:16:23.721 "blocks": 2560, 00:16:23.721 "percent": 32 00:16:23.721 } 00:16:23.721 }, 00:16:23.721 "base_bdevs_list": [ 00:16:23.721 { 00:16:23.721 "name": "spare", 00:16:23.721 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:23.721 "is_configured": true, 00:16:23.721 "data_offset": 256, 00:16:23.721 "data_size": 7936 00:16:23.721 }, 00:16:23.721 { 00:16:23.721 "name": "BaseBdev2", 00:16:23.721 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:23.721 "is_configured": true, 00:16:23.721 "data_offset": 256, 00:16:23.721 "data_size": 7936 00:16:23.721 } 00:16:23.721 ] 00:16:23.721 }' 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.721 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.721 [2024-11-26 23:01:02.784590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.980 [2024-11-26 23:01:02.849550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.980 [2024-11-26 23:01:02.849622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.980 [2024-11-26 23:01:02.849636] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.980 [2024-11-26 23:01:02.849647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.980 23:01:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.980 "name": "raid_bdev1", 00:16:23.980 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:23.980 "strip_size_kb": 0, 00:16:23.980 "state": "online", 00:16:23.980 "raid_level": "raid1", 00:16:23.980 "superblock": true, 00:16:23.980 "num_base_bdevs": 2, 00:16:23.980 "num_base_bdevs_discovered": 1, 00:16:23.980 "num_base_bdevs_operational": 1, 00:16:23.980 "base_bdevs_list": [ 00:16:23.980 { 00:16:23.980 "name": null, 00:16:23.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.980 "is_configured": false, 00:16:23.980 "data_offset": 0, 00:16:23.980 "data_size": 7936 00:16:23.980 }, 00:16:23.980 { 00:16:23.980 "name": "BaseBdev2", 00:16:23.980 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:23.980 "is_configured": true, 00:16:23.980 "data_offset": 256, 00:16:23.980 "data_size": 7936 00:16:23.980 } 00:16:23.980 ] 00:16:23.980 }' 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.980 23:01:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.240 "name": "raid_bdev1", 00:16:24.240 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:24.240 "strip_size_kb": 0, 00:16:24.240 "state": "online", 00:16:24.240 "raid_level": "raid1", 00:16:24.240 "superblock": true, 00:16:24.240 "num_base_bdevs": 2, 00:16:24.240 "num_base_bdevs_discovered": 1, 00:16:24.240 "num_base_bdevs_operational": 1, 00:16:24.240 "base_bdevs_list": [ 00:16:24.240 { 00:16:24.240 "name": null, 00:16:24.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.240 "is_configured": false, 00:16:24.240 "data_offset": 0, 00:16:24.240 "data_size": 7936 00:16:24.240 }, 00:16:24.240 { 00:16:24.240 "name": "BaseBdev2", 00:16:24.240 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:24.240 "is_configured": true, 00:16:24.240 "data_offset": 256, 00:16:24.240 "data_size": 7936 00:16:24.240 } 00:16:24.240 ] 00:16:24.240 }' 00:16:24.240 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.499 23:01:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.499 [2024-11-26 23:01:03.416962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.499 [2024-11-26 23:01:03.419216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:24.499 [2024-11-26 23:01:03.421064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.499 23:01:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.438 23:01:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.438 "name": "raid_bdev1", 00:16:25.438 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:25.438 "strip_size_kb": 0, 00:16:25.438 "state": "online", 00:16:25.438 "raid_level": "raid1", 00:16:25.438 "superblock": true, 00:16:25.438 "num_base_bdevs": 2, 00:16:25.438 "num_base_bdevs_discovered": 2, 00:16:25.438 "num_base_bdevs_operational": 2, 00:16:25.438 "process": { 00:16:25.438 "type": "rebuild", 00:16:25.438 "target": "spare", 00:16:25.438 "progress": { 00:16:25.438 "blocks": 2560, 00:16:25.438 "percent": 32 00:16:25.438 } 00:16:25.438 }, 00:16:25.438 "base_bdevs_list": [ 00:16:25.438 { 00:16:25.438 "name": "spare", 00:16:25.438 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:25.438 "is_configured": true, 00:16:25.438 "data_offset": 256, 00:16:25.438 "data_size": 7936 00:16:25.438 }, 00:16:25.438 { 00:16:25.438 "name": "BaseBdev2", 00:16:25.438 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:25.438 "is_configured": true, 00:16:25.438 "data_offset": 256, 00:16:25.438 "data_size": 7936 00:16:25.438 } 00:16:25.438 ] 00:16:25.438 }' 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.438 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # [[ spare == \s\p\a\r\e ]] 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:25.700 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.700 23:01:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.700 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.700 "name": "raid_bdev1", 00:16:25.700 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:25.700 "strip_size_kb": 0, 00:16:25.700 "state": "online", 00:16:25.700 "raid_level": "raid1", 00:16:25.700 "superblock": true, 00:16:25.700 "num_base_bdevs": 2, 00:16:25.700 "num_base_bdevs_discovered": 2, 00:16:25.700 "num_base_bdevs_operational": 2, 00:16:25.700 "process": { 00:16:25.700 "type": "rebuild", 00:16:25.700 "target": "spare", 00:16:25.700 "progress": { 00:16:25.700 "blocks": 2816, 00:16:25.700 "percent": 35 00:16:25.700 } 00:16:25.700 }, 00:16:25.700 "base_bdevs_list": [ 00:16:25.700 { 00:16:25.700 "name": "spare", 00:16:25.700 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:25.700 "is_configured": true, 00:16:25.700 "data_offset": 256, 00:16:25.700 "data_size": 7936 00:16:25.700 }, 00:16:25.700 { 00:16:25.701 "name": "BaseBdev2", 00:16:25.701 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:25.701 "is_configured": true, 00:16:25.701 "data_offset": 256, 00:16:25.701 "data_size": 7936 00:16:25.701 } 00:16:25.701 ] 00:16:25.701 }' 00:16:25.701 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.701 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.701 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.701 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.701 23:01:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.639 "name": "raid_bdev1", 00:16:26.639 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:26.639 "strip_size_kb": 0, 00:16:26.639 "state": "online", 00:16:26.639 "raid_level": "raid1", 00:16:26.639 "superblock": true, 00:16:26.639 "num_base_bdevs": 2, 00:16:26.639 "num_base_bdevs_discovered": 2, 00:16:26.639 "num_base_bdevs_operational": 2, 00:16:26.639 "process": { 00:16:26.639 "type": "rebuild", 00:16:26.639 "target": "spare", 00:16:26.639 "progress": { 00:16:26.639 "blocks": 5632, 00:16:26.639 "percent": 70 
00:16:26.639 } 00:16:26.639 }, 00:16:26.639 "base_bdevs_list": [ 00:16:26.639 { 00:16:26.639 "name": "spare", 00:16:26.639 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:26.639 "is_configured": true, 00:16:26.639 "data_offset": 256, 00:16:26.639 "data_size": 7936 00:16:26.639 }, 00:16:26.639 { 00:16:26.639 "name": "BaseBdev2", 00:16:26.639 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:26.639 "is_configured": true, 00:16:26.639 "data_offset": 256, 00:16:26.639 "data_size": 7936 00:16:26.639 } 00:16:26.639 ] 00:16:26.639 }' 00:16:26.639 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.899 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.899 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.899 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.899 23:01:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.467 [2024-11-26 23:01:06.537005] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:27.467 [2024-11-26 23:01:06.537075] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:27.467 [2024-11-26 23:01:06.537154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.725 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.985 "name": "raid_bdev1", 00:16:27.985 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:27.985 "strip_size_kb": 0, 00:16:27.985 "state": "online", 00:16:27.985 "raid_level": "raid1", 00:16:27.985 "superblock": true, 00:16:27.985 "num_base_bdevs": 2, 00:16:27.985 "num_base_bdevs_discovered": 2, 00:16:27.985 "num_base_bdevs_operational": 2, 00:16:27.985 "base_bdevs_list": [ 00:16:27.985 { 00:16:27.985 "name": "spare", 00:16:27.985 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 }, 00:16:27.985 { 00:16:27.985 "name": "BaseBdev2", 00:16:27.985 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 } 00:16:27.985 ] 00:16:27.985 }' 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.985 23:01:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.985 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.985 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.985 "name": "raid_bdev1", 00:16:27.985 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:27.985 "strip_size_kb": 0, 00:16:27.985 "state": "online", 00:16:27.985 "raid_level": 
"raid1", 00:16:27.985 "superblock": true, 00:16:27.985 "num_base_bdevs": 2, 00:16:27.985 "num_base_bdevs_discovered": 2, 00:16:27.985 "num_base_bdevs_operational": 2, 00:16:27.985 "base_bdevs_list": [ 00:16:27.985 { 00:16:27.985 "name": "spare", 00:16:27.985 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 }, 00:16:27.985 { 00:16:27.985 "name": "BaseBdev2", 00:16:27.985 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 } 00:16:27.985 ] 00:16:27.985 }' 00:16:27.985 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.985 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.985 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.245 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.245 "name": "raid_bdev1", 00:16:28.246 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:28.246 "strip_size_kb": 0, 00:16:28.246 "state": "online", 00:16:28.246 "raid_level": "raid1", 00:16:28.246 "superblock": true, 00:16:28.246 "num_base_bdevs": 2, 00:16:28.246 "num_base_bdevs_discovered": 2, 00:16:28.246 "num_base_bdevs_operational": 2, 00:16:28.246 "base_bdevs_list": [ 00:16:28.246 { 00:16:28.246 "name": "spare", 00:16:28.246 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:28.246 "is_configured": true, 00:16:28.246 "data_offset": 256, 00:16:28.246 "data_size": 7936 00:16:28.246 }, 00:16:28.246 { 00:16:28.246 "name": "BaseBdev2", 00:16:28.246 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:28.246 "is_configured": true, 00:16:28.246 "data_offset": 256, 00:16:28.246 "data_size": 7936 00:16:28.246 } 00:16:28.246 ] 00:16:28.246 }' 00:16:28.246 
23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.246 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.506 [2024-11-26 23:01:07.559973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.506 [2024-11-26 23:01:07.560005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.506 [2024-11-26 23:01:07.560089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.506 [2024-11-26 23:01:07.560155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.506 [2024-11-26 23:01:07.560165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:28.506 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:28.765 /dev/nbd0 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@873 -- # local i 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.765 1+0 records in 00:16:28.765 1+0 records out 00:16:28.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425196 s, 9.6 MB/s 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.765 23:01:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:28.765 23:01:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:29.024 /dev/nbd1 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:29.025 1+0 records in 00:16:29.025 1+0 records out 00:16:29.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444568 s, 9.2 MB/s 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:29.025 23:01:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.025 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.284 
23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.284 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.544 
23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.544 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.544 [2024-11-26 23:01:08.634776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.544 [2024-11-26 23:01:08.634823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.545 [2024-11-26 23:01:08.634846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:29.545 [2024-11-26 23:01:08.634855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.545 [2024-11-26 23:01:08.636751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.545 [2024-11-26 23:01:08.636782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.545 [2024-11-26 23:01:08.636832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:29.545 [2024-11-26 23:01:08.636884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.545 [2024-11-26 23:01:08.636980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.545 spare 00:16:29.545 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.545 23:01:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:29.545 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.545 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.805 [2024-11-26 23:01:08.737037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:29.805 [2024-11-26 23:01:08.737067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:29.805 [2024-11-26 23:01:08.737160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:29.805 [2024-11-26 23:01:08.737277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:29.805 [2024-11-26 23:01:08.737287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:29.805 [2024-11-26 23:01:08.737372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.805 
23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.805 "name": "raid_bdev1", 00:16:29.805 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:29.805 "strip_size_kb": 0, 00:16:29.805 "state": "online", 00:16:29.805 "raid_level": "raid1", 00:16:29.805 "superblock": true, 00:16:29.805 "num_base_bdevs": 2, 00:16:29.805 "num_base_bdevs_discovered": 2, 00:16:29.805 "num_base_bdevs_operational": 2, 00:16:29.805 "base_bdevs_list": [ 00:16:29.805 { 00:16:29.805 "name": "spare", 00:16:29.805 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:29.805 "is_configured": true, 00:16:29.805 "data_offset": 256, 00:16:29.805 "data_size": 7936 00:16:29.805 }, 00:16:29.805 { 00:16:29.805 "name": "BaseBdev2", 00:16:29.805 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:29.805 "is_configured": true, 00:16:29.805 "data_offset": 256, 00:16:29.805 "data_size": 7936 
00:16:29.805 } 00:16:29.805 ] 00:16:29.805 }' 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.805 23:01:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.373 "name": "raid_bdev1", 00:16:30.373 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:30.373 "strip_size_kb": 0, 00:16:30.373 "state": "online", 00:16:30.373 "raid_level": "raid1", 00:16:30.373 "superblock": true, 00:16:30.373 "num_base_bdevs": 2, 00:16:30.373 "num_base_bdevs_discovered": 2, 00:16:30.373 "num_base_bdevs_operational": 2, 00:16:30.373 "base_bdevs_list": [ 
00:16:30.373 { 00:16:30.373 "name": "spare", 00:16:30.373 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:30.373 "is_configured": true, 00:16:30.373 "data_offset": 256, 00:16:30.373 "data_size": 7936 00:16:30.373 }, 00:16:30.373 { 00:16:30.373 "name": "BaseBdev2", 00:16:30.373 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:30.373 "is_configured": true, 00:16:30.373 "data_offset": 256, 00:16:30.373 "data_size": 7936 00:16:30.373 } 00:16:30.373 ] 00:16:30.373 }' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.373 23:01:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.373 [2024-11-26 23:01:09.403037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.373 "name": "raid_bdev1", 00:16:30.373 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:30.373 "strip_size_kb": 0, 00:16:30.373 "state": "online", 00:16:30.373 "raid_level": "raid1", 00:16:30.373 "superblock": true, 00:16:30.373 "num_base_bdevs": 2, 00:16:30.373 "num_base_bdevs_discovered": 1, 00:16:30.373 "num_base_bdevs_operational": 1, 00:16:30.373 "base_bdevs_list": [ 00:16:30.373 { 00:16:30.373 "name": null, 00:16:30.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.373 "is_configured": false, 00:16:30.373 "data_offset": 0, 00:16:30.373 "data_size": 7936 00:16:30.373 }, 00:16:30.373 { 00:16:30.373 "name": "BaseBdev2", 00:16:30.373 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:30.373 "is_configured": true, 00:16:30.373 "data_offset": 256, 00:16:30.373 "data_size": 7936 00:16:30.373 } 00:16:30.373 ] 00:16:30.373 }' 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.373 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.943 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.943 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.943 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.943 [2024-11-26 23:01:09.871187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.943 [2024-11-26 23:01:09.871383] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:16:30.943 [2024-11-26 23:01:09.871409] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:30.943 [2024-11-26 23:01:09.871439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.943 [2024-11-26 23:01:09.873858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:30.943 [2024-11-26 23:01:09.875700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.943 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.943 23:01:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.883 "name": "raid_bdev1", 00:16:31.883 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:31.883 "strip_size_kb": 0, 00:16:31.883 "state": "online", 00:16:31.883 "raid_level": "raid1", 00:16:31.883 "superblock": true, 00:16:31.883 "num_base_bdevs": 2, 00:16:31.883 "num_base_bdevs_discovered": 2, 00:16:31.883 "num_base_bdevs_operational": 2, 00:16:31.883 "process": { 00:16:31.883 "type": "rebuild", 00:16:31.883 "target": "spare", 00:16:31.883 "progress": { 00:16:31.883 "blocks": 2560, 00:16:31.883 "percent": 32 00:16:31.883 } 00:16:31.883 }, 00:16:31.883 "base_bdevs_list": [ 00:16:31.883 { 00:16:31.883 "name": "spare", 00:16:31.883 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:31.883 "is_configured": true, 00:16:31.883 "data_offset": 256, 00:16:31.883 "data_size": 7936 00:16:31.883 }, 00:16:31.883 { 00:16:31.883 "name": "BaseBdev2", 00:16:31.883 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:31.883 "is_configured": true, 00:16:31.883 "data_offset": 256, 00:16:31.883 "data_size": 7936 00:16:31.883 } 00:16:31.883 ] 00:16:31.883 }' 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.883 23:01:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.144 23:01:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.144 [2024-11-26 23:01:11.034014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.144 [2024-11-26 23:01:11.081949] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:32.144 [2024-11-26 23:01:11.082000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.144 [2024-11-26 23:01:11.082013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.144 [2024-11-26 23:01:11.082022] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.144 23:01:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.144 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.144 "name": "raid_bdev1", 00:16:32.144 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:32.144 "strip_size_kb": 0, 00:16:32.144 "state": "online", 00:16:32.144 "raid_level": "raid1", 00:16:32.144 "superblock": true, 00:16:32.144 "num_base_bdevs": 2, 00:16:32.144 "num_base_bdevs_discovered": 1, 00:16:32.144 "num_base_bdevs_operational": 1, 00:16:32.144 "base_bdevs_list": [ 00:16:32.144 { 00:16:32.144 "name": null, 00:16:32.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.144 "is_configured": false, 00:16:32.144 "data_offset": 0, 00:16:32.144 "data_size": 7936 00:16:32.144 }, 00:16:32.144 { 00:16:32.144 "name": "BaseBdev2", 00:16:32.144 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:32.144 "is_configured": true, 00:16:32.144 "data_offset": 256, 00:16:32.144 "data_size": 7936 00:16:32.144 } 00:16:32.144 ] 00:16:32.145 }' 00:16:32.145 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.145 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.715 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:16:32.715 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.715 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.715 [2024-11-26 23:01:11.566711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.715 [2024-11-26 23:01:11.566768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.715 [2024-11-26 23:01:11.566793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:32.715 [2024-11-26 23:01:11.566804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.715 [2024-11-26 23:01:11.567023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.715 [2024-11-26 23:01:11.567039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.715 [2024-11-26 23:01:11.567095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.715 [2024-11-26 23:01:11.567115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.715 [2024-11-26 23:01:11.567123] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:32.715 [2024-11-26 23:01:11.567168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.715 [2024-11-26 23:01:11.569525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:32.715 [2024-11-26 23:01:11.571339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.715 spare 00:16:32.716 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.716 23:01:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.653 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.653 "name": 
"raid_bdev1", 00:16:33.653 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:33.653 "strip_size_kb": 0, 00:16:33.653 "state": "online", 00:16:33.653 "raid_level": "raid1", 00:16:33.653 "superblock": true, 00:16:33.653 "num_base_bdevs": 2, 00:16:33.653 "num_base_bdevs_discovered": 2, 00:16:33.653 "num_base_bdevs_operational": 2, 00:16:33.653 "process": { 00:16:33.653 "type": "rebuild", 00:16:33.653 "target": "spare", 00:16:33.653 "progress": { 00:16:33.653 "blocks": 2560, 00:16:33.653 "percent": 32 00:16:33.653 } 00:16:33.653 }, 00:16:33.653 "base_bdevs_list": [ 00:16:33.653 { 00:16:33.653 "name": "spare", 00:16:33.653 "uuid": "43b90446-6b9f-5fcf-9551-34dfde6cc5cd", 00:16:33.653 "is_configured": true, 00:16:33.653 "data_offset": 256, 00:16:33.653 "data_size": 7936 00:16:33.653 }, 00:16:33.653 { 00:16:33.653 "name": "BaseBdev2", 00:16:33.653 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:33.653 "is_configured": true, 00:16:33.653 "data_offset": 256, 00:16:33.653 "data_size": 7936 00:16:33.653 } 00:16:33.653 ] 00:16:33.653 }' 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.654 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.654 [2024-11-26 23:01:12.725577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:33.654 [2024-11-26 23:01:12.777524] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.654 [2024-11-26 23:01:12.777575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.654 [2024-11-26 23:01:12.777591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.654 [2024-11-26 23:01:12.777598] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.914 "name": "raid_bdev1", 00:16:33.914 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:33.914 "strip_size_kb": 0, 00:16:33.914 "state": "online", 00:16:33.914 "raid_level": "raid1", 00:16:33.914 "superblock": true, 00:16:33.914 "num_base_bdevs": 2, 00:16:33.914 "num_base_bdevs_discovered": 1, 00:16:33.914 "num_base_bdevs_operational": 1, 00:16:33.914 "base_bdevs_list": [ 00:16:33.914 { 00:16:33.914 "name": null, 00:16:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.914 "is_configured": false, 00:16:33.914 "data_offset": 0, 00:16:33.914 "data_size": 7936 00:16:33.914 }, 00:16:33.914 { 00:16:33.914 "name": "BaseBdev2", 00:16:33.914 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:33.914 "is_configured": true, 00:16:33.914 "data_offset": 256, 00:16:33.914 "data_size": 7936 00:16:33.914 } 00:16:33.914 ] 00:16:33.914 }' 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.914 23:01:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.175 23:01:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.175 "name": "raid_bdev1", 00:16:34.175 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:34.175 "strip_size_kb": 0, 00:16:34.175 "state": "online", 00:16:34.175 "raid_level": "raid1", 00:16:34.175 "superblock": true, 00:16:34.175 "num_base_bdevs": 2, 00:16:34.175 "num_base_bdevs_discovered": 1, 00:16:34.175 "num_base_bdevs_operational": 1, 00:16:34.175 "base_bdevs_list": [ 00:16:34.175 { 00:16:34.175 "name": null, 00:16:34.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.175 "is_configured": false, 00:16:34.175 "data_offset": 0, 00:16:34.175 "data_size": 7936 00:16:34.175 }, 00:16:34.175 { 00:16:34.175 "name": "BaseBdev2", 00:16:34.175 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:34.175 "is_configured": true, 00:16:34.175 "data_offset": 256, 00:16:34.175 "data_size": 7936 00:16:34.175 } 00:16:34.175 ] 00:16:34.175 }' 00:16:34.175 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.435 [2024-11-26 23:01:13.374452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:34.435 [2024-11-26 23:01:13.374495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.435 [2024-11-26 23:01:13.374514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:34.435 [2024-11-26 23:01:13.374523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.435 [2024-11-26 23:01:13.374714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.435 [2024-11-26 23:01:13.374728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:34.435 [2024-11-26 23:01:13.374781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:34.435 [2024-11-26 23:01:13.374795] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.435 [2024-11-26 23:01:13.374804] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:34.435 [2024-11-26 23:01:13.374821] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:34.435 BaseBdev1 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.435 23:01:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.374 "name": "raid_bdev1", 00:16:35.374 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:35.374 "strip_size_kb": 0, 00:16:35.374 "state": "online", 00:16:35.374 "raid_level": "raid1", 00:16:35.374 "superblock": true, 00:16:35.374 "num_base_bdevs": 2, 00:16:35.374 "num_base_bdevs_discovered": 1, 00:16:35.374 "num_base_bdevs_operational": 1, 00:16:35.374 "base_bdevs_list": [ 00:16:35.374 { 00:16:35.374 "name": null, 00:16:35.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.374 "is_configured": false, 00:16:35.374 "data_offset": 0, 00:16:35.374 "data_size": 7936 00:16:35.374 }, 00:16:35.374 { 00:16:35.374 "name": "BaseBdev2", 00:16:35.374 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:35.374 "is_configured": true, 00:16:35.374 "data_offset": 256, 00:16:35.374 "data_size": 7936 00:16:35.374 } 00:16:35.374 ] 00:16:35.374 }' 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.374 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.944 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.944 "name": "raid_bdev1", 00:16:35.944 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:35.944 "strip_size_kb": 0, 00:16:35.944 "state": "online", 00:16:35.944 "raid_level": "raid1", 00:16:35.944 "superblock": true, 00:16:35.944 "num_base_bdevs": 2, 00:16:35.944 "num_base_bdevs_discovered": 1, 00:16:35.944 "num_base_bdevs_operational": 1, 00:16:35.944 "base_bdevs_list": [ 00:16:35.944 { 00:16:35.944 "name": null, 00:16:35.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.944 "is_configured": false, 00:16:35.944 "data_offset": 0, 00:16:35.944 "data_size": 7936 00:16:35.944 }, 00:16:35.944 { 00:16:35.944 "name": "BaseBdev2", 00:16:35.944 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:35.944 "is_configured": 
true, 00:16:35.944 "data_offset": 256, 00:16:35.944 "data_size": 7936 00:16:35.944 } 00:16:35.944 ] 00:16:35.944 }' 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.945 23:01:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.945 [2024-11-26 23:01:14.994950] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.945 [2024-11-26 23:01:14.995120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.945 [2024-11-26 23:01:14.995134] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:35.945 request: 00:16:35.945 { 00:16:35.945 "base_bdev": "BaseBdev1", 00:16:35.945 "raid_bdev": "raid_bdev1", 00:16:35.945 "method": "bdev_raid_add_base_bdev", 00:16:35.945 "req_id": 1 00:16:35.945 } 00:16:35.945 Got JSON-RPC error response 00:16:35.945 response: 00:16:35.945 { 00:16:35.945 "code": -22, 00:16:35.945 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:35.945 } 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.945 23:01:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.322 "name": "raid_bdev1", 00:16:37.322 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:37.322 "strip_size_kb": 0, 00:16:37.322 "state": "online", 00:16:37.322 "raid_level": "raid1", 00:16:37.322 "superblock": true, 00:16:37.322 "num_base_bdevs": 2, 00:16:37.322 "num_base_bdevs_discovered": 1, 00:16:37.322 "num_base_bdevs_operational": 1, 00:16:37.322 "base_bdevs_list": [ 00:16:37.322 { 00:16:37.322 "name": null, 00:16:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.322 "is_configured": false, 00:16:37.322 
"data_offset": 0, 00:16:37.322 "data_size": 7936 00:16:37.322 }, 00:16:37.322 { 00:16:37.322 "name": "BaseBdev2", 00:16:37.322 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:37.322 "is_configured": true, 00:16:37.322 "data_offset": 256, 00:16:37.322 "data_size": 7936 00:16:37.322 } 00:16:37.322 ] 00:16:37.322 }' 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.322 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.580 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.581 "name": "raid_bdev1", 00:16:37.581 "uuid": "90da2b67-0a92-4390-a861-8919b49aab29", 00:16:37.581 
"strip_size_kb": 0, 00:16:37.581 "state": "online", 00:16:37.581 "raid_level": "raid1", 00:16:37.581 "superblock": true, 00:16:37.581 "num_base_bdevs": 2, 00:16:37.581 "num_base_bdevs_discovered": 1, 00:16:37.581 "num_base_bdevs_operational": 1, 00:16:37.581 "base_bdevs_list": [ 00:16:37.581 { 00:16:37.581 "name": null, 00:16:37.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.581 "is_configured": false, 00:16:37.581 "data_offset": 0, 00:16:37.581 "data_size": 7936 00:16:37.581 }, 00:16:37.581 { 00:16:37.581 "name": "BaseBdev2", 00:16:37.581 "uuid": "5aef948b-5e50-571b-ad43-841178e3f89f", 00:16:37.581 "is_configured": true, 00:16:37.581 "data_offset": 256, 00:16:37.581 "data_size": 7936 00:16:37.581 } 00:16:37.581 ] 00:16:37.581 }' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99716 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99716 ']' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99716 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99716 00:16:37.581 23:01:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99716' 00:16:37.581 killing process with pid 99716 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99716 00:16:37.581 Received shutdown signal, test time was about 60.000000 seconds 00:16:37.581 00:16:37.581 Latency(us) 00:16:37.581 [2024-11-26T23:01:16.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.581 [2024-11-26T23:01:16.709Z] =================================================================================================================== 00:16:37.581 [2024-11-26T23:01:16.709Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.581 [2024-11-26 23:01:16.608497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.581 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99716 00:16:37.581 [2024-11-26 23:01:16.608647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.581 [2024-11-26 23:01:16.608700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.581 [2024-11-26 23:01:16.608714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.581 [2024-11-26 23:01:16.642167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.841 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:37.841 00:16:37.841 real 0m18.347s 00:16:37.841 user 0m24.273s 00:16:37.841 sys 0m2.714s 00:16:37.841 23:01:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.841 23:01:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.841 ************************************ 00:16:37.841 END TEST raid_rebuild_test_sb_md_separate 00:16:37.841 ************************************ 00:16:37.841 23:01:16 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:37.841 23:01:16 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:37.841 23:01:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:37.841 23:01:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.841 23:01:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.841 ************************************ 00:16:37.841 START TEST raid_state_function_test_sb_md_interleaved 00:16:37.841 ************************************ 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.841 23:01:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100395 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:37.841 Process raid pid: 100395 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100395' 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100395 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100395 ']' 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.841 23:01:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.100 [2024-11-26 23:01:17.038803] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:16:38.101 [2024-11-26 23:01:17.038948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.101 [2024-11-26 23:01:17.180553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:38.101 [2024-11-26 23:01:17.218204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.359 [2024-11-26 23:01:17.245223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.359 [2024-11-26 23:01:17.288523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.359 [2024-11-26 23:01:17.288572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.926 [2024-11-26 23:01:17.848580] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.926 [2024-11-26 23:01:17.848628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.926 [2024-11-26 23:01:17.848640] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:16:38.926 [2024-11-26 23:01:17.848647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.926 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.926 "name": "Existed_Raid", 00:16:38.926 "uuid": "3562346d-5bbb-4d2a-8fde-5227aefc78a0", 00:16:38.926 "strip_size_kb": 0, 00:16:38.926 "state": "configuring", 00:16:38.926 "raid_level": "raid1", 00:16:38.926 "superblock": true, 00:16:38.926 "num_base_bdevs": 2, 00:16:38.926 "num_base_bdevs_discovered": 0, 00:16:38.926 "num_base_bdevs_operational": 2, 00:16:38.926 "base_bdevs_list": [ 00:16:38.926 { 00:16:38.926 "name": "BaseBdev1", 00:16:38.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.927 "is_configured": false, 00:16:38.927 "data_offset": 0, 00:16:38.927 "data_size": 0 00:16:38.927 }, 00:16:38.927 { 00:16:38.927 "name": "BaseBdev2", 00:16:38.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.927 "is_configured": false, 00:16:38.927 "data_offset": 0, 00:16:38.927 "data_size": 0 00:16:38.927 } 00:16:38.927 ] 00:16:38.927 }' 00:16:38.927 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.927 23:01:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-11-26 23:01:18.320608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:16:39.496 [2024-11-26 23:01:18.320653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-11-26 23:01:18.332640] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.496 [2024-11-26 23:01:18.332672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.496 [2024-11-26 23:01:18.332682] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.496 [2024-11-26 23:01:18.332689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-11-26 23:01:18.353666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.496 BaseBdev1 00:16:39.496 23:01:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.496 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.497 [ 00:16:39.497 { 00:16:39.497 "name": "BaseBdev1", 00:16:39.497 "aliases": [ 00:16:39.497 "c0437bf2-0510-41a1-82d2-4ebe2c324c31" 00:16:39.497 ], 00:16:39.497 "product_name": "Malloc 
disk", 00:16:39.497 "block_size": 4128, 00:16:39.497 "num_blocks": 8192, 00:16:39.497 "uuid": "c0437bf2-0510-41a1-82d2-4ebe2c324c31", 00:16:39.497 "md_size": 32, 00:16:39.497 "md_interleave": true, 00:16:39.497 "dif_type": 0, 00:16:39.497 "assigned_rate_limits": { 00:16:39.497 "rw_ios_per_sec": 0, 00:16:39.497 "rw_mbytes_per_sec": 0, 00:16:39.497 "r_mbytes_per_sec": 0, 00:16:39.497 "w_mbytes_per_sec": 0 00:16:39.497 }, 00:16:39.497 "claimed": true, 00:16:39.497 "claim_type": "exclusive_write", 00:16:39.497 "zoned": false, 00:16:39.497 "supported_io_types": { 00:16:39.497 "read": true, 00:16:39.497 "write": true, 00:16:39.497 "unmap": true, 00:16:39.497 "flush": true, 00:16:39.497 "reset": true, 00:16:39.497 "nvme_admin": false, 00:16:39.497 "nvme_io": false, 00:16:39.497 "nvme_io_md": false, 00:16:39.497 "write_zeroes": true, 00:16:39.497 "zcopy": true, 00:16:39.497 "get_zone_info": false, 00:16:39.497 "zone_management": false, 00:16:39.497 "zone_append": false, 00:16:39.497 "compare": false, 00:16:39.497 "compare_and_write": false, 00:16:39.497 "abort": true, 00:16:39.497 "seek_hole": false, 00:16:39.497 "seek_data": false, 00:16:39.497 "copy": true, 00:16:39.497 "nvme_iov_md": false 00:16:39.497 }, 00:16:39.497 "memory_domains": [ 00:16:39.497 { 00:16:39.497 "dma_device_id": "system", 00:16:39.497 "dma_device_type": 1 00:16:39.497 }, 00:16:39.497 { 00:16:39.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.497 "dma_device_type": 2 00:16:39.497 } 00:16:39.497 ], 00:16:39.497 "driver_specific": {} 00:16:39.497 } 00:16:39.497 ] 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:39.497 23:01:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.497 "name": "Existed_Raid", 00:16:39.497 "uuid": 
"e0f48c7e-cc0f-4d7a-badc-5ce2426e9a38", 00:16:39.497 "strip_size_kb": 0, 00:16:39.497 "state": "configuring", 00:16:39.497 "raid_level": "raid1", 00:16:39.497 "superblock": true, 00:16:39.497 "num_base_bdevs": 2, 00:16:39.497 "num_base_bdevs_discovered": 1, 00:16:39.497 "num_base_bdevs_operational": 2, 00:16:39.497 "base_bdevs_list": [ 00:16:39.497 { 00:16:39.497 "name": "BaseBdev1", 00:16:39.497 "uuid": "c0437bf2-0510-41a1-82d2-4ebe2c324c31", 00:16:39.497 "is_configured": true, 00:16:39.497 "data_offset": 256, 00:16:39.497 "data_size": 7936 00:16:39.497 }, 00:16:39.497 { 00:16:39.497 "name": "BaseBdev2", 00:16:39.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.497 "is_configured": false, 00:16:39.497 "data_offset": 0, 00:16:39.497 "data_size": 0 00:16:39.497 } 00:16:39.497 ] 00:16:39.497 }' 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.497 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 [2024-11-26 23:01:18.817814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.757 [2024-11-26 23:01:18.817874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 [2024-11-26 23:01:18.829885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.757 [2024-11-26 23:01:18.831674] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.757 [2024-11-26 23:01:18.831707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.757 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.758 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.758 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.758 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.758 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.018 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.018 "name": "Existed_Raid", 00:16:40.018 "uuid": "54a80284-9f81-4d89-89e0-ffa8ef6fa410", 00:16:40.018 "strip_size_kb": 0, 00:16:40.018 "state": "configuring", 00:16:40.018 "raid_level": "raid1", 00:16:40.018 "superblock": true, 00:16:40.018 "num_base_bdevs": 2, 00:16:40.018 "num_base_bdevs_discovered": 1, 00:16:40.018 "num_base_bdevs_operational": 2, 00:16:40.018 "base_bdevs_list": [ 00:16:40.018 { 00:16:40.018 "name": "BaseBdev1", 00:16:40.018 "uuid": "c0437bf2-0510-41a1-82d2-4ebe2c324c31", 00:16:40.018 "is_configured": true, 00:16:40.018 "data_offset": 256, 00:16:40.018 "data_size": 7936 00:16:40.018 }, 00:16:40.018 { 00:16:40.018 "name": "BaseBdev2", 00:16:40.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.018 "is_configured": false, 00:16:40.018 "data_offset": 0, 00:16:40.018 
"data_size": 0 00:16:40.018 } 00:16:40.018 ] 00:16:40.018 }' 00:16:40.018 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.018 23:01:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.278 [2024-11-26 23:01:19.313277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.278 [2024-11-26 23:01:19.313466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:40.278 [2024-11-26 23:01:19.313491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:40.278 [2024-11-26 23:01:19.313575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:40.278 [2024-11-26 23:01:19.313652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:40.278 [2024-11-26 23:01:19.313683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:40.278 [2024-11-26 23:01:19.313747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.278 BaseBdev2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.278 [ 00:16:40.278 { 00:16:40.278 "name": "BaseBdev2", 00:16:40.278 "aliases": [ 00:16:40.278 "a2217419-c862-4528-8a53-21cb3ad8d56b" 00:16:40.278 ], 00:16:40.278 "product_name": "Malloc disk", 00:16:40.278 "block_size": 4128, 00:16:40.278 "num_blocks": 8192, 00:16:40.278 "uuid": "a2217419-c862-4528-8a53-21cb3ad8d56b", 00:16:40.278 "md_size": 32, 00:16:40.278 "md_interleave": true, 00:16:40.278 "dif_type": 0, 00:16:40.278 "assigned_rate_limits": { 00:16:40.278 "rw_ios_per_sec": 0, 00:16:40.278 "rw_mbytes_per_sec": 0, 
00:16:40.278 "r_mbytes_per_sec": 0, 00:16:40.278 "w_mbytes_per_sec": 0 00:16:40.278 }, 00:16:40.278 "claimed": true, 00:16:40.278 "claim_type": "exclusive_write", 00:16:40.278 "zoned": false, 00:16:40.278 "supported_io_types": { 00:16:40.278 "read": true, 00:16:40.278 "write": true, 00:16:40.278 "unmap": true, 00:16:40.278 "flush": true, 00:16:40.278 "reset": true, 00:16:40.278 "nvme_admin": false, 00:16:40.278 "nvme_io": false, 00:16:40.278 "nvme_io_md": false, 00:16:40.278 "write_zeroes": true, 00:16:40.278 "zcopy": true, 00:16:40.278 "get_zone_info": false, 00:16:40.278 "zone_management": false, 00:16:40.278 "zone_append": false, 00:16:40.278 "compare": false, 00:16:40.278 "compare_and_write": false, 00:16:40.278 "abort": true, 00:16:40.278 "seek_hole": false, 00:16:40.278 "seek_data": false, 00:16:40.278 "copy": true, 00:16:40.278 "nvme_iov_md": false 00:16:40.278 }, 00:16:40.278 "memory_domains": [ 00:16:40.278 { 00:16:40.278 "dma_device_id": "system", 00:16:40.278 "dma_device_type": 1 00:16:40.278 }, 00:16:40.278 { 00:16:40.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.278 "dma_device_type": 2 00:16:40.278 } 00:16:40.278 ], 00:16:40.278 "driver_specific": {} 00:16:40.278 } 00:16:40.278 ] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.278 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.278 "name": "Existed_Raid", 00:16:40.278 "uuid": "54a80284-9f81-4d89-89e0-ffa8ef6fa410", 00:16:40.278 "strip_size_kb": 0, 00:16:40.278 "state": 
"online", 00:16:40.278 "raid_level": "raid1", 00:16:40.278 "superblock": true, 00:16:40.278 "num_base_bdevs": 2, 00:16:40.278 "num_base_bdevs_discovered": 2, 00:16:40.278 "num_base_bdevs_operational": 2, 00:16:40.278 "base_bdevs_list": [ 00:16:40.278 { 00:16:40.278 "name": "BaseBdev1", 00:16:40.278 "uuid": "c0437bf2-0510-41a1-82d2-4ebe2c324c31", 00:16:40.278 "is_configured": true, 00:16:40.278 "data_offset": 256, 00:16:40.278 "data_size": 7936 00:16:40.278 }, 00:16:40.278 { 00:16:40.278 "name": "BaseBdev2", 00:16:40.278 "uuid": "a2217419-c862-4528-8a53-21cb3ad8d56b", 00:16:40.278 "is_configured": true, 00:16:40.278 "data_offset": 256, 00:16:40.279 "data_size": 7936 00:16:40.279 } 00:16:40.279 ] 00:16:40.279 }' 00:16:40.279 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.279 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 [2024-11-26 23:01:19.781682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:40.848 "name": "Existed_Raid", 00:16:40.848 "aliases": [ 00:16:40.848 "54a80284-9f81-4d89-89e0-ffa8ef6fa410" 00:16:40.849 ], 00:16:40.849 "product_name": "Raid Volume", 00:16:40.849 "block_size": 4128, 00:16:40.849 "num_blocks": 7936, 00:16:40.849 "uuid": "54a80284-9f81-4d89-89e0-ffa8ef6fa410", 00:16:40.849 "md_size": 32, 00:16:40.849 "md_interleave": true, 00:16:40.849 "dif_type": 0, 00:16:40.849 "assigned_rate_limits": { 00:16:40.849 "rw_ios_per_sec": 0, 00:16:40.849 "rw_mbytes_per_sec": 0, 00:16:40.849 "r_mbytes_per_sec": 0, 00:16:40.849 "w_mbytes_per_sec": 0 00:16:40.849 }, 00:16:40.849 "claimed": false, 00:16:40.849 "zoned": false, 00:16:40.849 "supported_io_types": { 00:16:40.849 "read": true, 00:16:40.849 "write": true, 00:16:40.849 "unmap": false, 00:16:40.849 "flush": false, 00:16:40.849 "reset": true, 00:16:40.849 "nvme_admin": false, 00:16:40.849 "nvme_io": false, 00:16:40.849 "nvme_io_md": false, 00:16:40.849 "write_zeroes": true, 00:16:40.849 "zcopy": false, 00:16:40.849 "get_zone_info": false, 00:16:40.849 "zone_management": false, 00:16:40.849 "zone_append": false, 00:16:40.849 "compare": false, 00:16:40.849 "compare_and_write": false, 00:16:40.849 "abort": false, 00:16:40.849 "seek_hole": false, 00:16:40.849 "seek_data": false, 00:16:40.849 "copy": false, 00:16:40.849 "nvme_iov_md": false 00:16:40.849 }, 00:16:40.849 
"memory_domains": [ 00:16:40.849 { 00:16:40.849 "dma_device_id": "system", 00:16:40.849 "dma_device_type": 1 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.849 "dma_device_type": 2 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "dma_device_id": "system", 00:16:40.849 "dma_device_type": 1 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.849 "dma_device_type": 2 00:16:40.849 } 00:16:40.849 ], 00:16:40.849 "driver_specific": { 00:16:40.849 "raid": { 00:16:40.849 "uuid": "54a80284-9f81-4d89-89e0-ffa8ef6fa410", 00:16:40.849 "strip_size_kb": 0, 00:16:40.849 "state": "online", 00:16:40.849 "raid_level": "raid1", 00:16:40.849 "superblock": true, 00:16:40.849 "num_base_bdevs": 2, 00:16:40.849 "num_base_bdevs_discovered": 2, 00:16:40.849 "num_base_bdevs_operational": 2, 00:16:40.849 "base_bdevs_list": [ 00:16:40.849 { 00:16:40.849 "name": "BaseBdev1", 00:16:40.849 "uuid": "c0437bf2-0510-41a1-82d2-4ebe2c324c31", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 256, 00:16:40.849 "data_size": 7936 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "name": "BaseBdev2", 00:16:40.849 "uuid": "a2217419-c862-4528-8a53-21cb3ad8d56b", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 256, 00:16:40.849 "data_size": 7936 00:16:40.849 } 00:16:40.849 ] 00:16:40.849 } 00:16:40.849 } 00:16:40.849 }' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:40.849 BaseBdev2' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.849 23:01:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.849 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.849 23:01:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.109 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:41.109 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:41.109 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:41.109 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.109 23:01:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.109 [2024-11-26 23:01:19.993575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.109 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.110 23:01:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.110 "name": "Existed_Raid", 00:16:41.110 "uuid": "54a80284-9f81-4d89-89e0-ffa8ef6fa410", 00:16:41.110 "strip_size_kb": 0, 00:16:41.110 "state": "online", 00:16:41.110 "raid_level": "raid1", 
00:16:41.110 "superblock": true, 00:16:41.110 "num_base_bdevs": 2, 00:16:41.110 "num_base_bdevs_discovered": 1, 00:16:41.110 "num_base_bdevs_operational": 1, 00:16:41.110 "base_bdevs_list": [ 00:16:41.110 { 00:16:41.110 "name": null, 00:16:41.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.110 "is_configured": false, 00:16:41.110 "data_offset": 0, 00:16:41.110 "data_size": 7936 00:16:41.110 }, 00:16:41.110 { 00:16:41.110 "name": "BaseBdev2", 00:16:41.110 "uuid": "a2217419-c862-4528-8a53-21cb3ad8d56b", 00:16:41.110 "is_configured": true, 00:16:41.110 "data_offset": 256, 00:16:41.110 "data_size": 7936 00:16:41.110 } 00:16:41.110 ] 00:16:41.110 }' 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.110 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.370 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.631 [2024-11-26 23:01:20.517331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.631 [2024-11-26 23:01:20.517467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.631 [2024-11-26 23:01:20.529428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.631 [2024-11-26 23:01:20.529551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.631 [2024-11-26 23:01:20.529588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100395 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100395 ']' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100395 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100395 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.631 killing process with pid 100395 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100395' 00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100395 00:16:41.631 [2024-11-26 23:01:20.627782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:16:41.631 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100395 00:16:41.631 [2024-11-26 23:01:20.628726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.892 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:41.892 00:16:41.892 real 0m3.920s 00:16:41.892 user 0m6.148s 00:16:41.892 sys 0m0.873s 00:16:41.892 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.892 23:01:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.892 ************************************ 00:16:41.893 END TEST raid_state_function_test_sb_md_interleaved 00:16:41.893 ************************************ 00:16:41.893 23:01:20 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:41.893 23:01:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:41.893 23:01:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.893 23:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.893 ************************************ 00:16:41.893 START TEST raid_superblock_test_md_interleaved 00:16:41.893 ************************************ 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_malloc 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100636 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100636 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100636 ']' 00:16:41.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.893 23:01:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.162 [2024-11-26 23:01:21.037120] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:42.162 [2024-11-26 23:01:21.037301] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100636 ] 00:16:42.162 [2024-11-26 23:01:21.177885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:42.162 [2024-11-26 23:01:21.215574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.162 [2024-11-26 23:01:21.241236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.162 [2024-11-26 23:01:21.284888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.162 [2024-11-26 23:01:21.284927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:42.766 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.766 23:01:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 malloc1 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 [2024-11-26 23:01:21.886529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.048 [2024-11-26 23:01:21.886589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.048 [2024-11-26 23:01:21.886631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:43.048 [2024-11-26 23:01:21.886646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.048 [2024-11-26 23:01:21.888579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.048 [2024-11-26 23:01:21.888617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.048 pt1 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:43.048 
23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 malloc2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 [2024-11-26 23:01:21.915619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.048 [2024-11-26 23:01:21.915712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.048 [2024-11-26 23:01:21.915748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:43.048 [2024-11-26 23:01:21.915777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.048 [2024-11-26 23:01:21.917606] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.048 [2024-11-26 23:01:21.917673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.048 pt2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 [2024-11-26 23:01:21.927642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.048 [2024-11-26 23:01:21.929418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.048 [2024-11-26 23:01:21.929600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:43.048 [2024-11-26 23:01:21.929649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:43.048 [2024-11-26 23:01:21.929765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:43.048 [2024-11-26 23:01:21.929875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:43.048 [2024-11-26 23:01:21.929917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:43.048 [2024-11-26 23:01:21.930023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.048 
23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.048 
23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.048 "name": "raid_bdev1", 00:16:43.048 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:43.048 "strip_size_kb": 0, 00:16:43.048 "state": "online", 00:16:43.048 "raid_level": "raid1", 00:16:43.048 "superblock": true, 00:16:43.048 "num_base_bdevs": 2, 00:16:43.048 "num_base_bdevs_discovered": 2, 00:16:43.048 "num_base_bdevs_operational": 2, 00:16:43.048 "base_bdevs_list": [ 00:16:43.048 { 00:16:43.048 "name": "pt1", 00:16:43.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 256, 00:16:43.048 "data_size": 7936 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "pt2", 00:16:43.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 256, 00:16:43.048 "data_size": 7936 00:16:43.048 } 00:16:43.048 ] 00:16:43.048 }' 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.048 23:01:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.308 23:01:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 [2024-11-26 23:01:22.376038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.308 "name": "raid_bdev1", 00:16:43.308 "aliases": [ 00:16:43.308 "328dd25f-5376-43d7-8a1c-f34529441997" 00:16:43.308 ], 00:16:43.308 "product_name": "Raid Volume", 00:16:43.308 "block_size": 4128, 00:16:43.308 "num_blocks": 7936, 00:16:43.308 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:43.308 "md_size": 32, 00:16:43.308 "md_interleave": true, 00:16:43.308 "dif_type": 0, 00:16:43.308 "assigned_rate_limits": { 00:16:43.308 "rw_ios_per_sec": 0, 00:16:43.308 "rw_mbytes_per_sec": 0, 00:16:43.308 "r_mbytes_per_sec": 0, 00:16:43.308 "w_mbytes_per_sec": 0 00:16:43.308 }, 00:16:43.308 "claimed": false, 00:16:43.308 "zoned": false, 00:16:43.308 "supported_io_types": { 00:16:43.308 "read": true, 00:16:43.308 "write": true, 00:16:43.308 "unmap": false, 00:16:43.308 "flush": false, 00:16:43.308 "reset": true, 00:16:43.308 "nvme_admin": false, 00:16:43.308 "nvme_io": false, 00:16:43.308 "nvme_io_md": false, 00:16:43.308 "write_zeroes": true, 00:16:43.308 "zcopy": false, 00:16:43.308 "get_zone_info": false, 00:16:43.308 "zone_management": false, 00:16:43.308 "zone_append": false, 00:16:43.308 "compare": false, 00:16:43.308 "compare_and_write": false, 00:16:43.308 
"abort": false, 00:16:43.308 "seek_hole": false, 00:16:43.308 "seek_data": false, 00:16:43.308 "copy": false, 00:16:43.308 "nvme_iov_md": false 00:16:43.308 }, 00:16:43.308 "memory_domains": [ 00:16:43.308 { 00:16:43.308 "dma_device_id": "system", 00:16:43.308 "dma_device_type": 1 00:16:43.308 }, 00:16:43.308 { 00:16:43.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.308 "dma_device_type": 2 00:16:43.308 }, 00:16:43.308 { 00:16:43.308 "dma_device_id": "system", 00:16:43.308 "dma_device_type": 1 00:16:43.308 }, 00:16:43.308 { 00:16:43.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.308 "dma_device_type": 2 00:16:43.308 } 00:16:43.308 ], 00:16:43.308 "driver_specific": { 00:16:43.308 "raid": { 00:16:43.308 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:43.308 "strip_size_kb": 0, 00:16:43.308 "state": "online", 00:16:43.308 "raid_level": "raid1", 00:16:43.308 "superblock": true, 00:16:43.308 "num_base_bdevs": 2, 00:16:43.308 "num_base_bdevs_discovered": 2, 00:16:43.308 "num_base_bdevs_operational": 2, 00:16:43.308 "base_bdevs_list": [ 00:16:43.308 { 00:16:43.308 "name": "pt1", 00:16:43.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.308 "is_configured": true, 00:16:43.308 "data_offset": 256, 00:16:43.308 "data_size": 7936 00:16:43.308 }, 00:16:43.308 { 00:16:43.308 "name": "pt2", 00:16:43.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.308 "is_configured": true, 00:16:43.308 "data_offset": 256, 00:16:43.308 "data_size": 7936 00:16:43.308 } 00:16:43.308 ] 00:16:43.308 } 00:16:43.308 } 00:16:43.308 }' 00:16:43.308 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:43.568 pt2' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.568 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.569 [2024-11-26 23:01:22.596076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=328dd25f-5376-43d7-8a1c-f34529441997 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 328dd25f-5376-43d7-8a1c-f34529441997 ']' 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.569 [2024-11-26 23:01:22.639814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.569 [2024-11-26 23:01:22.639841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.569 [2024-11-26 
23:01:22.639932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.569 [2024-11-26 23:01:22.639997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.569 [2024-11-26 23:01:22.640009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.569 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:43.829 23:01:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 [2024-11-26 23:01:22.775857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:43.829 [2024-11-26 23:01:22.777731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:43.829 [2024-11-26 23:01:22.777831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:43.829 [2024-11-26 23:01:22.777911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:43.829 [2024-11-26 23:01:22.777926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.829 [2024-11-26 23:01:22.777935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:43.829 request: 00:16:43.829 { 00:16:43.829 "name": "raid_bdev1", 00:16:43.829 "raid_level": "raid1", 00:16:43.829 "base_bdevs": [ 00:16:43.829 "malloc1", 00:16:43.829 "malloc2" 00:16:43.829 ], 00:16:43.829 "superblock": false, 00:16:43.829 "method": "bdev_raid_create", 00:16:43.829 "req_id": 1 00:16:43.829 } 00:16:43.829 Got JSON-RPC error response 
00:16:43.829 response: 00:16:43.829 { 00:16:43.829 "code": -17, 00:16:43.829 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:43.829 } 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 
[2024-11-26 23:01:22.831838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.829 [2024-11-26 23:01:22.831923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.829 [2024-11-26 23:01:22.831956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:43.829 [2024-11-26 23:01:22.831984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.829 [2024-11-26 23:01:22.833807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.829 [2024-11-26 23:01:22.833872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.829 [2024-11-26 23:01:22.833948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:43.829 [2024-11-26 23:01:22.834000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.829 pt1 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.829 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.830 "name": "raid_bdev1", 00:16:43.830 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:43.830 "strip_size_kb": 0, 00:16:43.830 "state": "configuring", 00:16:43.830 "raid_level": "raid1", 00:16:43.830 "superblock": true, 00:16:43.830 "num_base_bdevs": 2, 00:16:43.830 "num_base_bdevs_discovered": 1, 00:16:43.830 "num_base_bdevs_operational": 2, 00:16:43.830 "base_bdevs_list": [ 00:16:43.830 { 00:16:43.830 "name": "pt1", 00:16:43.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.830 "is_configured": true, 00:16:43.830 "data_offset": 256, 00:16:43.830 "data_size": 7936 00:16:43.830 }, 00:16:43.830 { 00:16:43.830 "name": null, 00:16:43.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.830 "is_configured": false, 00:16:43.830 "data_offset": 256, 00:16:43.830 "data_size": 7936 00:16:43.830 } 00:16:43.830 ] 00:16:43.830 }' 00:16:43.830 23:01:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.830 23:01:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.411 [2024-11-26 23:01:23.327965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.411 [2024-11-26 23:01:23.328059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.411 [2024-11-26 23:01:23.328104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:44.411 [2024-11-26 23:01:23.328133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.411 [2024-11-26 23:01:23.328266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.411 [2024-11-26 23:01:23.328307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.411 [2024-11-26 23:01:23.328357] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:44.411 [2024-11-26 23:01:23.328397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.411 [2024-11-26 23:01:23.328479] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:44.411 [2024-11-26 23:01:23.328518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:44.411 [2024-11-26 23:01:23.328598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:44.411 [2024-11-26 23:01:23.328689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:44.411 [2024-11-26 23:01:23.328720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:44.411 [2024-11-26 23:01:23.328802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.411 pt2 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.411 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.411 "name": "raid_bdev1", 00:16:44.412 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:44.412 "strip_size_kb": 0, 00:16:44.412 "state": "online", 00:16:44.412 "raid_level": "raid1", 00:16:44.412 "superblock": true, 00:16:44.412 "num_base_bdevs": 2, 00:16:44.412 "num_base_bdevs_discovered": 2, 00:16:44.412 "num_base_bdevs_operational": 2, 00:16:44.412 "base_bdevs_list": [ 00:16:44.412 { 00:16:44.412 "name": "pt1", 00:16:44.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.412 "is_configured": true, 00:16:44.412 "data_offset": 256, 00:16:44.412 "data_size": 7936 00:16:44.412 }, 00:16:44.412 { 00:16:44.412 "name": "pt2", 00:16:44.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.412 "is_configured": true, 00:16:44.412 "data_offset": 256, 00:16:44.412 "data_size": 7936 00:16:44.412 } 00:16:44.412 ] 00:16:44.412 }' 00:16:44.412 23:01:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.412 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.674 [2024-11-26 23:01:23.756341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.674 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.674 "name": "raid_bdev1", 00:16:44.674 "aliases": [ 00:16:44.674 "328dd25f-5376-43d7-8a1c-f34529441997" 00:16:44.674 ], 00:16:44.674 "product_name": "Raid Volume", 00:16:44.674 "block_size": 4128, 00:16:44.674 
"num_blocks": 7936, 00:16:44.674 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:44.674 "md_size": 32, 00:16:44.674 "md_interleave": true, 00:16:44.674 "dif_type": 0, 00:16:44.674 "assigned_rate_limits": { 00:16:44.674 "rw_ios_per_sec": 0, 00:16:44.674 "rw_mbytes_per_sec": 0, 00:16:44.674 "r_mbytes_per_sec": 0, 00:16:44.674 "w_mbytes_per_sec": 0 00:16:44.674 }, 00:16:44.674 "claimed": false, 00:16:44.674 "zoned": false, 00:16:44.674 "supported_io_types": { 00:16:44.674 "read": true, 00:16:44.674 "write": true, 00:16:44.674 "unmap": false, 00:16:44.674 "flush": false, 00:16:44.674 "reset": true, 00:16:44.674 "nvme_admin": false, 00:16:44.674 "nvme_io": false, 00:16:44.674 "nvme_io_md": false, 00:16:44.674 "write_zeroes": true, 00:16:44.674 "zcopy": false, 00:16:44.674 "get_zone_info": false, 00:16:44.674 "zone_management": false, 00:16:44.674 "zone_append": false, 00:16:44.674 "compare": false, 00:16:44.674 "compare_and_write": false, 00:16:44.674 "abort": false, 00:16:44.674 "seek_hole": false, 00:16:44.674 "seek_data": false, 00:16:44.674 "copy": false, 00:16:44.674 "nvme_iov_md": false 00:16:44.674 }, 00:16:44.674 "memory_domains": [ 00:16:44.674 { 00:16:44.674 "dma_device_id": "system", 00:16:44.674 "dma_device_type": 1 00:16:44.674 }, 00:16:44.674 { 00:16:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.674 "dma_device_type": 2 00:16:44.674 }, 00:16:44.674 { 00:16:44.674 "dma_device_id": "system", 00:16:44.674 "dma_device_type": 1 00:16:44.674 }, 00:16:44.674 { 00:16:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.674 "dma_device_type": 2 00:16:44.674 } 00:16:44.674 ], 00:16:44.674 "driver_specific": { 00:16:44.674 "raid": { 00:16:44.674 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:44.674 "strip_size_kb": 0, 00:16:44.674 "state": "online", 00:16:44.674 "raid_level": "raid1", 00:16:44.674 "superblock": true, 00:16:44.674 "num_base_bdevs": 2, 00:16:44.674 "num_base_bdevs_discovered": 2, 00:16:44.674 "num_base_bdevs_operational": 
2, 00:16:44.674 "base_bdevs_list": [ 00:16:44.675 { 00:16:44.675 "name": "pt1", 00:16:44.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.675 "is_configured": true, 00:16:44.675 "data_offset": 256, 00:16:44.675 "data_size": 7936 00:16:44.675 }, 00:16:44.675 { 00:16:44.675 "name": "pt2", 00:16:44.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.675 "is_configured": true, 00:16:44.675 "data_offset": 256, 00:16:44.675 "data_size": 7936 00:16:44.675 } 00:16:44.675 ] 00:16:44.675 } 00:16:44.675 } 00:16:44.675 }' 00:16:44.675 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:44.934 pt2' 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.934 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.935 23:01:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.935 23:01:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.935 [2024-11-26 23:01:24.000418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 328dd25f-5376-43d7-8a1c-f34529441997 '!=' 328dd25f-5376-43d7-8a1c-f34529441997 ']' 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.935 [2024-11-26 23:01:24.044202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.935 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.195 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.195 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.195 "name": "raid_bdev1", 00:16:45.195 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:45.195 "strip_size_kb": 0, 00:16:45.195 "state": "online", 00:16:45.195 "raid_level": "raid1", 00:16:45.195 "superblock": true, 00:16:45.195 "num_base_bdevs": 2, 00:16:45.195 "num_base_bdevs_discovered": 1, 00:16:45.195 "num_base_bdevs_operational": 1, 00:16:45.195 "base_bdevs_list": [ 00:16:45.195 { 00:16:45.195 "name": null, 00:16:45.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.195 "is_configured": false, 00:16:45.195 "data_offset": 0, 00:16:45.195 "data_size": 7936 00:16:45.195 }, 00:16:45.195 { 00:16:45.195 "name": "pt2", 00:16:45.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.195 "is_configured": true, 00:16:45.195 "data_offset": 256, 00:16:45.195 "data_size": 7936 00:16:45.195 } 00:16:45.195 ] 00:16:45.195 
}' 00:16:45.195 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.195 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.455 [2024-11-26 23:01:24.532330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.455 [2024-11-26 23:01:24.532391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.455 [2024-11-26 23:01:24.532461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.455 [2024-11-26 23:01:24.532512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.455 [2024-11-26 23:01:24.532544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.455 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.715 
23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.715 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.716 [2024-11-26 23:01:24.604363] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.716 [2024-11-26 23:01:24.604419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.716 [2024-11-26 23:01:24.604435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:45.716 [2024-11-26 23:01:24.604446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.716 [2024-11-26 23:01:24.606298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.716 [2024-11-26 23:01:24.606381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.716 [2024-11-26 23:01:24.606427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:45.716 [2024-11-26 23:01:24.606458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.716 [2024-11-26 23:01:24.606512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:45.716 [2024-11-26 23:01:24.606521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:45.716 [2024-11-26 23:01:24.606602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:45.716 [2024-11-26 23:01:24.606660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:45.716 [2024-11-26 23:01:24.606667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:45.716 [2024-11-26 23:01:24.606725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.716 pt2 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.716 23:01:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.716 "name": "raid_bdev1", 00:16:45.716 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:45.716 "strip_size_kb": 0, 00:16:45.716 "state": "online", 00:16:45.716 
"raid_level": "raid1", 00:16:45.716 "superblock": true, 00:16:45.716 "num_base_bdevs": 2, 00:16:45.716 "num_base_bdevs_discovered": 1, 00:16:45.716 "num_base_bdevs_operational": 1, 00:16:45.716 "base_bdevs_list": [ 00:16:45.716 { 00:16:45.716 "name": null, 00:16:45.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.716 "is_configured": false, 00:16:45.716 "data_offset": 256, 00:16:45.716 "data_size": 7936 00:16:45.716 }, 00:16:45.716 { 00:16:45.716 "name": "pt2", 00:16:45.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.716 "is_configured": true, 00:16:45.716 "data_offset": 256, 00:16:45.716 "data_size": 7936 00:16:45.716 } 00:16:45.716 ] 00:16:45.716 }' 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.716 23:01:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 [2024-11-26 23:01:25.016466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.975 [2024-11-26 23:01:25.016532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.975 [2024-11-26 23:01:25.016602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.975 [2024-11-26 23:01:25.016659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.975 [2024-11-26 23:01:25.016690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:45.975 23:01:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.976 [2024-11-26 23:01:25.076483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:45.976 [2024-11-26 23:01:25.076563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.976 [2024-11-26 23:01:25.076614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:45.976 [2024-11-26 23:01:25.076640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.976 [2024-11-26 23:01:25.078510] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.976 [2024-11-26 23:01:25.078574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:45.976 [2024-11-26 23:01:25.078638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:45.976 [2024-11-26 23:01:25.078680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:45.976 [2024-11-26 23:01:25.078830] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:45.976 [2024-11-26 23:01:25.078884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.976 [2024-11-26 23:01:25.078954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:45.976 [2024-11-26 23:01:25.079027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.976 [2024-11-26 23:01:25.079122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:45.976 [2024-11-26 23:01:25.079160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:45.976 [2024-11-26 23:01:25.079239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:45.976 [2024-11-26 23:01:25.079337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:45.976 [2024-11-26 23:01:25.079371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:45.976 [2024-11-26 23:01:25.079474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.976 pt1 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.976 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.235 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.235 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.235 "name": "raid_bdev1", 00:16:46.235 "uuid": "328dd25f-5376-43d7-8a1c-f34529441997", 00:16:46.235 "strip_size_kb": 0, 00:16:46.235 "state": "online", 00:16:46.235 "raid_level": "raid1", 00:16:46.235 "superblock": true, 00:16:46.235 "num_base_bdevs": 2, 00:16:46.235 "num_base_bdevs_discovered": 1, 00:16:46.235 "num_base_bdevs_operational": 1, 00:16:46.235 "base_bdevs_list": [ 00:16:46.235 { 00:16:46.235 "name": null, 00:16:46.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.235 "is_configured": false, 00:16:46.235 "data_offset": 256, 00:16:46.235 "data_size": 7936 00:16:46.235 }, 00:16:46.235 { 00:16:46.235 "name": "pt2", 00:16:46.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.235 "is_configured": true, 00:16:46.235 "data_offset": 256, 00:16:46.235 "data_size": 7936 00:16:46.235 } 00:16:46.235 ] 00:16:46.235 }' 00:16:46.235 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.235 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 [2024-11-26 23:01:25.580828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 328dd25f-5376-43d7-8a1c-f34529441997 '!=' 328dd25f-5376-43d7-8a1c-f34529441997 ']' 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100636 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100636 ']' 00:16:46.494 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100636 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100636 00:16:46.754 killing process with pid 100636 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.754 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100636' 00:16:46.754 
23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 100636 00:16:46.754 [2024-11-26 23:01:25.660838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.754 [2024-11-26 23:01:25.660905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.755 [2024-11-26 23:01:25.660940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.755 [2024-11-26 23:01:25.660949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:46.755 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 100636 00:16:46.755 [2024-11-26 23:01:25.684413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.015 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:47.015 00:16:47.015 real 0m4.975s 00:16:47.015 user 0m8.119s 00:16:47.015 sys 0m1.146s 00:16:47.015 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.015 ************************************ 00:16:47.015 END TEST raid_superblock_test_md_interleaved 00:16:47.015 ************************************ 00:16:47.015 23:01:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.015 23:01:25 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:47.015 23:01:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:47.015 23:01:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.015 23:01:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.015 ************************************ 00:16:47.015 START TEST raid_rebuild_test_sb_md_interleaved 00:16:47.015 
************************************ 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:47.015 23:01:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:47.015 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:47.016 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:47.016 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:47.016 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=100955 00:16:47.016 23:01:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 100955 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100955 ']' 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.016 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.016 [2024-11-26 23:01:26.090595] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:47.016 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:47.016 Zero copy mechanism will not be used. 00:16:47.016 [2024-11-26 23:01:26.090781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100955 ] 00:16:47.276 [2024-11-26 23:01:26.229093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:47.276 [2024-11-26 23:01:26.268546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.276 [2024-11-26 23:01:26.294994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.276 [2024-11-26 23:01:26.338717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.276 [2024-11-26 23:01:26.338756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.846 BaseBdev1_malloc 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.846 [2024-11-26 23:01:26.927613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.846 [2024-11-26 23:01:26.927698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.846 
[2024-11-26 23:01:26.927717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.846 [2024-11-26 23:01:26.927733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.846 [2024-11-26 23:01:26.929731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.846 [2024-11-26 23:01:26.929769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.846 BaseBdev1 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.846 BaseBdev2_malloc 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.846 [2024-11-26 23:01:26.956537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:47.846 [2024-11-26 23:01:26.956595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.846 [2024-11-26 23:01:26.956613] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.846 [2024-11-26 23:01:26.956624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.846 [2024-11-26 23:01:26.958493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.846 [2024-11-26 23:01:26.958595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.846 BaseBdev2 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.846 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.107 spare_malloc 00:16:48.107 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.107 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:48.107 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.107 23:01:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.107 spare_delay 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.107 23:01:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.107 [2024-11-26 23:01:27.011822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.107 [2024-11-26 23:01:27.011905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.107 [2024-11-26 23:01:27.011943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:48.107 [2024-11-26 23:01:27.011960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.107 [2024-11-26 23:01:27.014755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.107 [2024-11-26 23:01:27.014809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.107 spare 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:48.107 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.108 [2024-11-26 23:01:27.023862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.108 [2024-11-26 23:01:27.025802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.108 [2024-11-26 23:01:27.026025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:48.108 [2024-11-26 23:01:27.026051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:48.108 [2024-11-26 23:01:27.026143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:16:48.108 [2024-11-26 23:01:27.026214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:48.108 [2024-11-26 23:01:27.026222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:48.108 [2024-11-26 23:01:27.026309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.108 "name": "raid_bdev1", 00:16:48.108 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:48.108 "strip_size_kb": 0, 00:16:48.108 "state": "online", 00:16:48.108 "raid_level": "raid1", 00:16:48.108 "superblock": true, 00:16:48.108 "num_base_bdevs": 2, 00:16:48.108 "num_base_bdevs_discovered": 2, 00:16:48.108 "num_base_bdevs_operational": 2, 00:16:48.108 "base_bdevs_list": [ 00:16:48.108 { 00:16:48.108 "name": "BaseBdev1", 00:16:48.108 "uuid": "e01b065e-a9b0-5087-8a81-94d8ab79b99d", 00:16:48.108 "is_configured": true, 00:16:48.108 "data_offset": 256, 00:16:48.108 "data_size": 7936 00:16:48.108 }, 00:16:48.108 { 00:16:48.108 "name": "BaseBdev2", 00:16:48.108 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:48.108 "is_configured": true, 00:16:48.108 "data_offset": 256, 00:16:48.108 "data_size": 7936 00:16:48.108 } 00:16:48.108 ] 00:16:48.108 }' 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.108 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.367 
23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.367 [2024-11-26 23:01:27.440200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.367 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.627 [2024-11-26 23:01:27.503926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.627 "name": "raid_bdev1", 00:16:48.627 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:48.627 "strip_size_kb": 0, 00:16:48.627 "state": "online", 00:16:48.627 "raid_level": "raid1", 00:16:48.627 "superblock": true, 00:16:48.627 "num_base_bdevs": 2, 00:16:48.627 "num_base_bdevs_discovered": 1, 00:16:48.627 "num_base_bdevs_operational": 1, 00:16:48.627 "base_bdevs_list": [ 00:16:48.627 { 00:16:48.627 "name": null, 00:16:48.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.627 "is_configured": false, 00:16:48.627 "data_offset": 0, 00:16:48.627 "data_size": 7936 00:16:48.627 }, 00:16:48.627 { 00:16:48.627 "name": "BaseBdev2", 00:16:48.627 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:48.627 "is_configured": true, 00:16:48.627 "data_offset": 256, 00:16:48.627 "data_size": 7936 00:16:48.627 } 00:16:48.627 ] 00:16:48.627 }' 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.627 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.887 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.887 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.887 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.887 [2024-11-26 23:01:27.960086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.887 [2024-11-26 23:01:27.963836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:48.887 [2024-11-26 23:01:27.965628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.887 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:48.887 23:01:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.268 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.269 23:01:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.269 "name": "raid_bdev1", 00:16:50.269 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:50.269 "strip_size_kb": 0, 00:16:50.269 "state": "online", 00:16:50.269 "raid_level": "raid1", 00:16:50.269 "superblock": true, 00:16:50.269 "num_base_bdevs": 2, 00:16:50.269 "num_base_bdevs_discovered": 2, 00:16:50.269 "num_base_bdevs_operational": 2, 00:16:50.269 "process": { 00:16:50.269 "type": "rebuild", 00:16:50.269 "target": "spare", 00:16:50.269 "progress": { 00:16:50.269 "blocks": 
2560, 00:16:50.269 "percent": 32 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 "base_bdevs_list": [ 00:16:50.269 { 00:16:50.269 "name": "spare", 00:16:50.269 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:50.269 "is_configured": true, 00:16:50.269 "data_offset": 256, 00:16:50.269 "data_size": 7936 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "name": "BaseBdev2", 00:16:50.269 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:50.269 "is_configured": true, 00:16:50.269 "data_offset": 256, 00:16:50.269 "data_size": 7936 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }' 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.269 [2024-11-26 23:01:29.122581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.269 [2024-11-26 23:01:29.172626] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:50.269 [2024-11-26 23:01:29.172681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.269 [2024-11-26 23:01:29.172695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.269 [2024-11-26 23:01:29.172709] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.269 "name": "raid_bdev1", 00:16:50.269 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:50.269 "strip_size_kb": 0, 00:16:50.269 "state": "online", 00:16:50.269 "raid_level": "raid1", 00:16:50.269 "superblock": true, 00:16:50.269 "num_base_bdevs": 2, 00:16:50.269 "num_base_bdevs_discovered": 1, 00:16:50.269 "num_base_bdevs_operational": 1, 00:16:50.269 "base_bdevs_list": [ 00:16:50.269 { 00:16:50.269 "name": null, 00:16:50.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.269 "is_configured": false, 00:16:50.269 "data_offset": 0, 00:16:50.269 "data_size": 7936 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "name": "BaseBdev2", 00:16:50.269 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:50.269 "is_configured": true, 00:16:50.269 "data_offset": 256, 00:16:50.269 "data_size": 7936 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }' 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.269 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.528 23:01:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.528 "name": "raid_bdev1", 00:16:50.528 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:50.528 "strip_size_kb": 0, 00:16:50.528 "state": "online", 00:16:50.528 "raid_level": "raid1", 00:16:50.528 "superblock": true, 00:16:50.528 "num_base_bdevs": 2, 00:16:50.528 "num_base_bdevs_discovered": 1, 00:16:50.528 "num_base_bdevs_operational": 1, 00:16:50.528 "base_bdevs_list": [ 00:16:50.528 { 00:16:50.528 "name": null, 00:16:50.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.528 "is_configured": false, 00:16:50.528 "data_offset": 0, 00:16:50.528 "data_size": 7936 00:16:50.528 }, 00:16:50.528 { 00:16:50.528 "name": "BaseBdev2", 00:16:50.528 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:50.528 "is_configured": true, 00:16:50.528 "data_offset": 256, 00:16:50.528 "data_size": 7936 00:16:50.528 } 00:16:50.528 ] 00:16:50.528 }' 00:16:50.528 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.788 23:01:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 [2024-11-26 23:01:29.724872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.788 [2024-11-26 23:01:29.728564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:50.788 [2024-11-26 23:01:29.730312] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 23:01:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.730 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.730 "name": "raid_bdev1", 00:16:51.730 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:51.730 "strip_size_kb": 0, 00:16:51.730 "state": "online", 00:16:51.730 "raid_level": "raid1", 00:16:51.730 "superblock": true, 00:16:51.730 "num_base_bdevs": 2, 00:16:51.730 "num_base_bdevs_discovered": 2, 00:16:51.730 "num_base_bdevs_operational": 2, 00:16:51.730 "process": { 00:16:51.730 "type": "rebuild", 00:16:51.730 "target": "spare", 00:16:51.731 "progress": { 00:16:51.731 "blocks": 2560, 00:16:51.731 "percent": 32 00:16:51.731 } 00:16:51.731 }, 00:16:51.731 "base_bdevs_list": [ 00:16:51.731 { 00:16:51.731 "name": "spare", 00:16:51.731 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:51.731 "is_configured": true, 00:16:51.731 "data_offset": 256, 00:16:51.731 "data_size": 7936 00:16:51.731 }, 00:16:51.731 { 00:16:51.731 "name": "BaseBdev2", 00:16:51.731 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:51.731 "is_configured": true, 00:16:51.731 "data_offset": 256, 00:16:51.731 "data_size": 7936 00:16:51.731 } 00:16:51.731 ] 00:16:51.731 }' 00:16:51.731 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.731 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.731 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.991 23:01:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:51.991 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=619 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.991 23:01:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.991 "name": "raid_bdev1", 00:16:51.991 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:51.991 "strip_size_kb": 0, 00:16:51.991 "state": "online", 00:16:51.991 "raid_level": "raid1", 00:16:51.991 "superblock": true, 00:16:51.991 "num_base_bdevs": 2, 00:16:51.991 "num_base_bdevs_discovered": 2, 00:16:51.991 "num_base_bdevs_operational": 2, 00:16:51.991 "process": { 00:16:51.991 "type": "rebuild", 00:16:51.991 "target": "spare", 00:16:51.991 "progress": { 00:16:51.991 "blocks": 2816, 00:16:51.991 "percent": 35 00:16:51.991 } 00:16:51.991 }, 00:16:51.991 "base_bdevs_list": [ 00:16:51.991 { 00:16:51.991 "name": "spare", 00:16:51.991 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:51.991 "is_configured": true, 00:16:51.991 "data_offset": 256, 00:16:51.991 "data_size": 7936 00:16:51.991 }, 00:16:51.991 { 00:16:51.991 "name": "BaseBdev2", 00:16:51.991 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:51.991 "is_configured": true, 00:16:51.991 "data_offset": 256, 00:16:51.991 "data_size": 7936 00:16:51.991 } 00:16:51.991 ] 00:16:51.991 }' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.991 23:01:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.931 23:01:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.931 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.932 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.932 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.932 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.932 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.932 "name": "raid_bdev1", 00:16:52.932 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:52.932 "strip_size_kb": 0, 00:16:52.932 "state": "online", 00:16:52.932 "raid_level": "raid1", 00:16:52.932 "superblock": true, 00:16:52.932 "num_base_bdevs": 2, 00:16:52.932 "num_base_bdevs_discovered": 2, 00:16:52.932 
"num_base_bdevs_operational": 2, 00:16:52.932 "process": { 00:16:52.932 "type": "rebuild", 00:16:52.932 "target": "spare", 00:16:52.932 "progress": { 00:16:52.932 "blocks": 5632, 00:16:52.932 "percent": 70 00:16:52.932 } 00:16:52.932 }, 00:16:52.932 "base_bdevs_list": [ 00:16:52.932 { 00:16:52.932 "name": "spare", 00:16:52.932 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:52.932 "is_configured": true, 00:16:52.932 "data_offset": 256, 00:16:52.932 "data_size": 7936 00:16:52.932 }, 00:16:52.932 { 00:16:52.932 "name": "BaseBdev2", 00:16:52.932 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:52.932 "is_configured": true, 00:16:52.932 "data_offset": 256, 00:16:52.932 "data_size": 7936 00:16:52.932 } 00:16:52.932 ] 00:16:52.932 }' 00:16:52.932 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.191 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.191 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.191 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.191 23:01:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.761 [2024-11-26 23:01:32.846629] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:53.761 [2024-11-26 23:01:32.846778] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:53.761 [2024-11-26 23:01:32.846924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.333 "name": "raid_bdev1", 00:16:54.333 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:54.333 "strip_size_kb": 0, 00:16:54.333 "state": "online", 00:16:54.333 "raid_level": "raid1", 00:16:54.333 "superblock": true, 00:16:54.333 "num_base_bdevs": 2, 00:16:54.333 "num_base_bdevs_discovered": 2, 00:16:54.333 "num_base_bdevs_operational": 2, 00:16:54.333 "base_bdevs_list": [ 00:16:54.333 { 00:16:54.333 "name": "spare", 00:16:54.333 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:54.333 "is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 }, 00:16:54.333 { 00:16:54.333 "name": "BaseBdev2", 00:16:54.333 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:54.333 
"is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 } 00:16:54.333 ] 00:16:54.333 }' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.333 "name": "raid_bdev1", 00:16:54.333 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:54.333 "strip_size_kb": 0, 00:16:54.333 "state": "online", 00:16:54.333 "raid_level": "raid1", 00:16:54.333 "superblock": true, 00:16:54.333 "num_base_bdevs": 2, 00:16:54.333 "num_base_bdevs_discovered": 2, 00:16:54.333 "num_base_bdevs_operational": 2, 00:16:54.333 "base_bdevs_list": [ 00:16:54.333 { 00:16:54.333 "name": "spare", 00:16:54.333 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:54.333 "is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 }, 00:16:54.333 { 00:16:54.333 "name": "BaseBdev2", 00:16:54.333 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:54.333 "is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 } 00:16:54.333 ] 00:16:54.333 }' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.333 "name": "raid_bdev1", 00:16:54.333 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:54.333 "strip_size_kb": 0, 00:16:54.333 "state": "online", 00:16:54.333 "raid_level": "raid1", 00:16:54.333 "superblock": true, 00:16:54.333 "num_base_bdevs": 2, 00:16:54.333 "num_base_bdevs_discovered": 2, 00:16:54.333 "num_base_bdevs_operational": 2, 00:16:54.333 "base_bdevs_list": [ 00:16:54.333 { 00:16:54.333 "name": "spare", 00:16:54.333 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:54.333 
"is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 }, 00:16:54.333 { 00:16:54.333 "name": "BaseBdev2", 00:16:54.333 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:54.333 "is_configured": true, 00:16:54.333 "data_offset": 256, 00:16:54.333 "data_size": 7936 00:16:54.333 } 00:16:54.333 ] 00:16:54.333 }' 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.333 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.902 [2024-11-26 23:01:33.847408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.902 [2024-11-26 23:01:33.847489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.902 [2024-11-26 23:01:33.847611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.902 [2024-11-26 23:01:33.847700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.902 [2024-11-26 23:01:33.847784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:54.902 
23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.902 [2024-11-26 23:01:33.919421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.902 [2024-11-26 23:01:33.919521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.902 [2024-11-26 23:01:33.919545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:54.902 [2024-11-26 23:01:33.919554] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.902 [2024-11-26 23:01:33.921526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.902 [2024-11-26 23:01:33.921562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.902 [2024-11-26 23:01:33.921613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:54.902 [2024-11-26 23:01:33.921658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.902 [2024-11-26 23:01:33.921762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.902 spare 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:54.902 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.903 23:01:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.903 [2024-11-26 23:01:34.021811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:54.903 [2024-11-26 23:01:34.021840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:54.903 [2024-11-26 23:01:34.021921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:16:54.903 [2024-11-26 23:01:34.021998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:54.903 [2024-11-26 23:01:34.022011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:54.903 [2024-11-26 23:01:34.022080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.903 23:01:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.903 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.166 23:01:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.166 "name": "raid_bdev1", 00:16:55.166 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:55.166 "strip_size_kb": 0, 00:16:55.166 "state": "online", 00:16:55.166 "raid_level": "raid1", 00:16:55.166 "superblock": true, 00:16:55.166 "num_base_bdevs": 2, 00:16:55.166 "num_base_bdevs_discovered": 2, 00:16:55.166 "num_base_bdevs_operational": 2, 00:16:55.166 "base_bdevs_list": [ 00:16:55.166 { 00:16:55.166 "name": "spare", 00:16:55.166 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:55.166 "is_configured": true, 00:16:55.166 "data_offset": 256, 00:16:55.166 "data_size": 7936 00:16:55.166 }, 00:16:55.166 { 00:16:55.166 "name": "BaseBdev2", 00:16:55.166 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:55.166 "is_configured": true, 00:16:55.166 "data_offset": 256, 00:16:55.166 "data_size": 7936 00:16:55.166 } 00:16:55.166 ] 00:16:55.166 }' 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.166 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.429 23:01:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.429 "name": "raid_bdev1", 00:16:55.429 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:55.429 "strip_size_kb": 0, 00:16:55.429 "state": "online", 00:16:55.429 "raid_level": "raid1", 00:16:55.429 "superblock": true, 00:16:55.429 "num_base_bdevs": 2, 00:16:55.429 "num_base_bdevs_discovered": 2, 00:16:55.429 "num_base_bdevs_operational": 2, 00:16:55.429 "base_bdevs_list": [ 00:16:55.429 { 00:16:55.429 "name": "spare", 00:16:55.429 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:55.429 "is_configured": true, 00:16:55.429 "data_offset": 256, 00:16:55.429 "data_size": 7936 00:16:55.429 }, 00:16:55.429 { 00:16:55.429 "name": "BaseBdev2", 00:16:55.429 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:55.429 "is_configured": true, 00:16:55.429 "data_offset": 256, 00:16:55.429 "data_size": 7936 00:16:55.429 } 00:16:55.429 ] 00:16:55.429 }' 00:16:55.429 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.688 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.689 23:01:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.689 [2024-11-26 23:01:34.663667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.689 23:01:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.689 "name": "raid_bdev1", 00:16:55.689 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:55.689 "strip_size_kb": 0, 00:16:55.689 "state": "online", 00:16:55.689 "raid_level": "raid1", 00:16:55.689 "superblock": true, 00:16:55.689 "num_base_bdevs": 2, 00:16:55.689 "num_base_bdevs_discovered": 1, 00:16:55.689 "num_base_bdevs_operational": 1, 00:16:55.689 "base_bdevs_list": [ 00:16:55.689 { 00:16:55.689 "name": null, 00:16:55.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.689 "is_configured": false, 00:16:55.689 "data_offset": 0, 00:16:55.689 "data_size": 7936 00:16:55.689 }, 00:16:55.689 { 00:16:55.689 "name": "BaseBdev2", 00:16:55.689 
"uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:55.689 "is_configured": true, 00:16:55.689 "data_offset": 256, 00:16:55.689 "data_size": 7936 00:16:55.689 } 00:16:55.689 ] 00:16:55.689 }' 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.689 23:01:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.948 23:01:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.948 23:01:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.949 23:01:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.949 [2024-11-26 23:01:35.059773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.949 [2024-11-26 23:01:35.059991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.949 [2024-11-26 23:01:35.060072] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:55.949 [2024-11-26 23:01:35.060135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.949 [2024-11-26 23:01:35.063837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:55.949 23:01:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.949 23:01:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:55.949 [2024-11-26 23:01:35.065679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:57.334 "name": "raid_bdev1", 00:16:57.334 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:57.334 "strip_size_kb": 0, 00:16:57.334 "state": "online", 00:16:57.334 "raid_level": "raid1", 00:16:57.334 "superblock": true, 00:16:57.334 "num_base_bdevs": 2, 00:16:57.334 "num_base_bdevs_discovered": 2, 00:16:57.334 "num_base_bdevs_operational": 2, 00:16:57.334 "process": { 00:16:57.334 "type": "rebuild", 00:16:57.334 "target": "spare", 00:16:57.334 "progress": { 00:16:57.334 "blocks": 2560, 00:16:57.334 "percent": 32 00:16:57.334 } 00:16:57.334 }, 00:16:57.334 "base_bdevs_list": [ 00:16:57.334 { 00:16:57.334 "name": "spare", 00:16:57.334 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:57.334 "is_configured": true, 00:16:57.334 "data_offset": 256, 00:16:57.334 "data_size": 7936 00:16:57.334 }, 00:16:57.334 { 00:16:57.334 "name": "BaseBdev2", 00:16:57.334 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:57.334 "is_configured": true, 00:16:57.334 "data_offset": 256, 00:16:57.334 "data_size": 7936 00:16:57.334 } 00:16:57.334 ] 00:16:57.334 }' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.334 [2024-11-26 23:01:36.197690] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.334 [2024-11-26 23:01:36.271823] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.334 [2024-11-26 23:01:36.271885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.334 [2024-11-26 23:01:36.271899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.334 [2024-11-26 23:01:36.271909] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.334 23:01:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.334 "name": "raid_bdev1", 00:16:57.334 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:57.334 "strip_size_kb": 0, 00:16:57.334 "state": "online", 00:16:57.334 "raid_level": "raid1", 00:16:57.334 "superblock": true, 00:16:57.334 "num_base_bdevs": 2, 00:16:57.334 "num_base_bdevs_discovered": 1, 00:16:57.334 "num_base_bdevs_operational": 1, 00:16:57.334 "base_bdevs_list": [ 00:16:57.334 { 00:16:57.334 "name": null, 00:16:57.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.334 "is_configured": false, 00:16:57.334 "data_offset": 0, 00:16:57.334 "data_size": 7936 00:16:57.334 }, 00:16:57.334 { 00:16:57.334 "name": "BaseBdev2", 00:16:57.334 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:57.334 "is_configured": true, 00:16:57.334 "data_offset": 256, 00:16:57.334 "data_size": 7936 00:16:57.334 } 00:16:57.334 ] 00:16:57.334 }' 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.334 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.904 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.904 23:01:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.904 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.904 [2024-11-26 23:01:36.747899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.904 [2024-11-26 23:01:36.747966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.904 [2024-11-26 23:01:36.747987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:57.904 [2024-11-26 23:01:36.747997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.904 [2024-11-26 23:01:36.748173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.904 [2024-11-26 23:01:36.748186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.904 [2024-11-26 23:01:36.748234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:57.904 [2024-11-26 23:01:36.748245] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.904 [2024-11-26 23:01:36.748255] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
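After `spare` is removed mid-rebuild, the trace calls `verify_raid_bdev_state raid_bdev1 online raid1 0 1` (bdev_raid.sh@103-115) to confirm the array stays online and degraded to one operational base bdev. A Python sketch of those state checks, using an abridged copy of the degraded-state JSON from this trace (function and constant names are illustrative only):

```python
import json

# Abridged raid bdev JSON from the trace, after the "spare" base bdev
# was removed: the slot becomes a null placeholder and the discovered/
# operational counts drop to 1 while the array itself stays online.
INFO = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Sketch of the verify_raid_bdev_state checks: compare the
    reported state, level, strip size, and operational count
    against the expected values."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(INFO, "online", "raid1", 0, 1))
```

The same helper is invoked earlier with `num_base_bdevs_operational=2` while both `spare` and `BaseBdev2` are configured; only the expected count changes between the healthy and degraded assertions.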
00:16:57.904 [2024-11-26 23:01:36.748309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.904 [2024-11-26 23:01:36.751349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:16:57.904 [2024-11-26 23:01:36.753137] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.904 spare 00:16:57.904 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.904 23:01:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.843 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:58.843 "name": "raid_bdev1", 00:16:58.843 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:58.843 "strip_size_kb": 0, 00:16:58.843 "state": "online", 00:16:58.843 "raid_level": "raid1", 00:16:58.843 "superblock": true, 00:16:58.843 "num_base_bdevs": 2, 00:16:58.843 "num_base_bdevs_discovered": 2, 00:16:58.843 "num_base_bdevs_operational": 2, 00:16:58.843 "process": { 00:16:58.843 "type": "rebuild", 00:16:58.843 "target": "spare", 00:16:58.843 "progress": { 00:16:58.843 "blocks": 2560, 00:16:58.843 "percent": 32 00:16:58.843 } 00:16:58.843 }, 00:16:58.843 "base_bdevs_list": [ 00:16:58.843 { 00:16:58.843 "name": "spare", 00:16:58.844 "uuid": "891523d7-03f0-5edf-bc3c-311ebbeb3b7d", 00:16:58.844 "is_configured": true, 00:16:58.844 "data_offset": 256, 00:16:58.844 "data_size": 7936 00:16:58.844 }, 00:16:58.844 { 00:16:58.844 "name": "BaseBdev2", 00:16:58.844 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:58.844 "is_configured": true, 00:16:58.844 "data_offset": 256, 00:16:58.844 "data_size": 7936 00:16:58.844 } 00:16:58.844 ] 00:16:58.844 }' 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.844 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.844 [2024-11-26 
23:01:37.898460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.844 [2024-11-26 23:01:37.959300] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:58.844 [2024-11-26 23:01:37.959352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.844 [2024-11-26 23:01:37.959386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.844 [2024-11-26 23:01:37.959393] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.104 23:01:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.104 "name": "raid_bdev1", 00:16:59.104 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:59.104 "strip_size_kb": 0, 00:16:59.104 "state": "online", 00:16:59.104 "raid_level": "raid1", 00:16:59.104 "superblock": true, 00:16:59.104 "num_base_bdevs": 2, 00:16:59.104 "num_base_bdevs_discovered": 1, 00:16:59.104 "num_base_bdevs_operational": 1, 00:16:59.104 "base_bdevs_list": [ 00:16:59.104 { 00:16:59.104 "name": null, 00:16:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.104 "is_configured": false, 00:16:59.104 "data_offset": 0, 00:16:59.104 "data_size": 7936 00:16:59.104 }, 00:16:59.104 { 00:16:59.104 "name": "BaseBdev2", 00:16:59.104 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:59.104 "is_configured": true, 00:16:59.104 "data_offset": 256, 00:16:59.104 "data_size": 7936 00:16:59.104 } 00:16:59.104 ] 00:16:59.104 }' 00:16:59.104 23:01:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.104 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.363 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.363 23:01:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.363 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.363 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.364 "name": "raid_bdev1", 00:16:59.364 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:16:59.364 "strip_size_kb": 0, 00:16:59.364 "state": "online", 00:16:59.364 "raid_level": "raid1", 00:16:59.364 "superblock": true, 00:16:59.364 "num_base_bdevs": 2, 00:16:59.364 "num_base_bdevs_discovered": 1, 00:16:59.364 "num_base_bdevs_operational": 1, 00:16:59.364 "base_bdevs_list": [ 00:16:59.364 { 00:16:59.364 "name": null, 00:16:59.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.364 "is_configured": false, 00:16:59.364 "data_offset": 0, 00:16:59.364 "data_size": 7936 00:16:59.364 }, 00:16:59.364 { 00:16:59.364 "name": "BaseBdev2", 00:16:59.364 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:16:59.364 "is_configured": true, 00:16:59.364 "data_offset": 256, 
00:16:59.364 "data_size": 7936 00:16:59.364 } 00:16:59.364 ] 00:16:59.364 }' 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.364 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.623 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 [2024-11-26 23:01:38.519133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.624 [2024-11-26 23:01:38.519258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.624 [2024-11-26 23:01:38.519283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:59.624 [2024-11-26 23:01:38.519293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.624 [2024-11-26 23:01:38.519460] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.624 [2024-11-26 23:01:38.519474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:59.624 [2024-11-26 23:01:38.519518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:59.624 [2024-11-26 23:01:38.519537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:59.624 [2024-11-26 23:01:38.519551] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:59.624 [2024-11-26 23:01:38.519560] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:59.624 BaseBdev1 00:16:59.624 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.624 23:01:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.563 23:01:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.563 "name": "raid_bdev1", 00:17:00.563 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:17:00.563 "strip_size_kb": 0, 00:17:00.563 "state": "online", 00:17:00.563 "raid_level": "raid1", 00:17:00.563 "superblock": true, 00:17:00.563 "num_base_bdevs": 2, 00:17:00.563 "num_base_bdevs_discovered": 1, 00:17:00.563 "num_base_bdevs_operational": 1, 00:17:00.563 "base_bdevs_list": [ 00:17:00.563 { 00:17:00.563 "name": null, 00:17:00.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.563 "is_configured": false, 00:17:00.563 "data_offset": 0, 00:17:00.563 "data_size": 7936 00:17:00.563 }, 00:17:00.563 { 00:17:00.563 "name": "BaseBdev2", 00:17:00.563 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:17:00.563 "is_configured": true, 00:17:00.563 "data_offset": 256, 00:17:00.563 "data_size": 7936 00:17:00.563 } 00:17:00.563 ] 00:17:00.563 }' 00:17:00.563 23:01:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.563 23:01:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.172 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.172 "name": "raid_bdev1", 00:17:01.172 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:17:01.172 "strip_size_kb": 0, 00:17:01.172 "state": "online", 00:17:01.172 "raid_level": "raid1", 00:17:01.172 "superblock": true, 00:17:01.172 "num_base_bdevs": 2, 00:17:01.172 "num_base_bdevs_discovered": 1, 00:17:01.172 "num_base_bdevs_operational": 1, 00:17:01.172 "base_bdevs_list": [ 00:17:01.172 { 00:17:01.172 "name": 
null, 00:17:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.172 "is_configured": false, 00:17:01.172 "data_offset": 0, 00:17:01.172 "data_size": 7936 00:17:01.172 }, 00:17:01.172 { 00:17:01.172 "name": "BaseBdev2", 00:17:01.172 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:17:01.173 "is_configured": true, 00:17:01.173 "data_offset": 256, 00:17:01.173 "data_size": 7936 00:17:01.173 } 00:17:01.173 ] 00:17:01.173 }' 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.173 [2024-11-26 23:01:40.167571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.173 [2024-11-26 23:01:40.167782] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:01.173 [2024-11-26 23:01:40.167841] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:01.173 request: 00:17:01.173 { 00:17:01.173 "base_bdev": "BaseBdev1", 00:17:01.173 "raid_bdev": "raid_bdev1", 00:17:01.173 "method": "bdev_raid_add_base_bdev", 00:17:01.173 "req_id": 1 00:17:01.173 } 00:17:01.173 Got JSON-RPC error response 00:17:01.173 response: 00:17:01.173 { 00:17:01.173 "code": -22, 00:17:01.173 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:01.173 } 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:01.173 23:01:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.114 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.114 "name": "raid_bdev1", 00:17:02.114 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:17:02.114 "strip_size_kb": 0, 
00:17:02.114 "state": "online", 00:17:02.114 "raid_level": "raid1", 00:17:02.114 "superblock": true, 00:17:02.114 "num_base_bdevs": 2, 00:17:02.114 "num_base_bdevs_discovered": 1, 00:17:02.114 "num_base_bdevs_operational": 1, 00:17:02.114 "base_bdevs_list": [ 00:17:02.114 { 00:17:02.114 "name": null, 00:17:02.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.114 "is_configured": false, 00:17:02.114 "data_offset": 0, 00:17:02.114 "data_size": 7936 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev2", 00:17:02.114 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 256, 00:17:02.115 "data_size": 7936 00:17:02.115 } 00:17:02.115 ] 00:17:02.115 }' 00:17:02.115 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.115 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.683 
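The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` step traced above passes precisely because the RPC fails: the examined superblock no longer lists BaseBdev1's uuid, so the call must come back with JSON-RPC error -22, and the test inverts that failure into a success (`es=1`, then `(( !es == 0 ))`). A minimal sketch of that expected-failure idiom — this is a simplification for illustration, not the actual `NOT` helper from autotest_common.sh, which additionally records the exit status in `es`:

```shell
#!/usr/bin/env bash
# Simplified expected-failure wrapper: succeed only when the wrapped
# command fails, as the test above requires of the add_base_bdev RPC.
NOT() {
    if "$@"; then
        return 1   # the command unexpectedly succeeded
    fi
    return 0       # the command failed, which is what we wanted
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

The real helper also distinguishes *why* the command failed (exit code thresholds), which matters when a test expects a specific JSON-RPC error rather than any nonzero status.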
23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.683 "name": "raid_bdev1", 00:17:02.683 "uuid": "d84837cf-5a52-4372-b816-d20f52a66aa9", 00:17:02.683 "strip_size_kb": 0, 00:17:02.683 "state": "online", 00:17:02.683 "raid_level": "raid1", 00:17:02.683 "superblock": true, 00:17:02.683 "num_base_bdevs": 2, 00:17:02.683 "num_base_bdevs_discovered": 1, 00:17:02.683 "num_base_bdevs_operational": 1, 00:17:02.683 "base_bdevs_list": [ 00:17:02.683 { 00:17:02.683 "name": null, 00:17:02.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.683 "is_configured": false, 00:17:02.683 "data_offset": 0, 00:17:02.683 "data_size": 7936 00:17:02.683 }, 00:17:02.683 { 00:17:02.683 "name": "BaseBdev2", 00:17:02.683 "uuid": "0a99ec71-2a84-5e6b-93bd-a8ffafcb4b57", 00:17:02.683 "is_configured": true, 00:17:02.683 "data_offset": 256, 00:17:02.683 "data_size": 7936 00:17:02.683 } 00:17:02.683 ] 00:17:02.683 }' 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.683 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 100955 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100955 ']' 00:17:02.943 23:01:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100955 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100955 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.943 killing process with pid 100955 00:17:02.943 Received shutdown signal, test time was about 60.000000 seconds 00:17:02.943 00:17:02.943 Latency(us) 00:17:02.943 [2024-11-26T23:01:42.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.943 [2024-11-26T23:01:42.071Z] =================================================================================================================== 00:17:02.943 [2024-11-26T23:01:42.071Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100955' 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100955 00:17:02.943 [2024-11-26 23:01:41.850385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.943 [2024-11-26 23:01:41.850503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.943 [2024-11-26 23:01:41.850546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.943 [2024-11-26 23:01:41.850557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.943 23:01:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100955 00:17:02.943 [2024-11-26 23:01:41.883240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.203 23:01:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:03.203 00:17:03.203 real 0m16.097s 00:17:03.203 user 0m21.448s 00:17:03.203 sys 0m1.689s 00:17:03.203 23:01:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.203 23:01:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.203 ************************************ 00:17:03.203 END TEST raid_rebuild_test_sb_md_interleaved 00:17:03.203 ************************************ 00:17:03.203 23:01:42 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:03.203 23:01:42 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:03.203 23:01:42 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 100955 ']' 00:17:03.203 23:01:42 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 100955 00:17:03.203 23:01:42 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:03.203 00:17:03.203 real 10m0.599s 00:17:03.203 user 14m6.553s 00:17:03.203 sys 1m54.725s 00:17:03.203 ************************************ 00:17:03.203 END TEST bdev_raid 00:17:03.203 ************************************ 00:17:03.203 23:01:42 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.203 23:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.203 23:01:42 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:03.203 23:01:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.203 23:01:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.203 23:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.203 
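Throughout the rebuild test that just finished, `verify_raid_bdev_process` reads the `bdev_raid_get_bdevs` output with `jq -r '.process.type // "none"'` and `'.process.target // "none"'`. jq's `//` alternative operator substitutes the right-hand side when the left side is null, so once the rebuild completes and the `process` object disappears from the RPC output, the filter yields the literal string `none` and the `[[ none == \n\o\n\e ]]` comparison still has a concrete value to match. A small self-contained sketch of the idiom — the sample JSON below is illustrative, not captured from this run:

```shell
#!/usr/bin/env bash
# Illustrative bdev_raid_get_bdevs-style records (hypothetical sample data).
mid_rebuild='{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'
finished='{"name": "raid_bdev1"}'

# jq's // operator falls back to "none" when .process.type is null,
# i.e. when the process object is absent from the record.
type1=$(jq -r '.process.type // "none"' <<< "$mid_rebuild")
type2=$(jq -r '.process.type // "none"' <<< "$finished")

[[ "$type1" == rebuild ]] && echo "rebuild in progress"
[[ "$type2" == none ]] && echo "no active process"
```

Without the `//` fallback, `jq` would print the literal string `null` for a finished rebuild, and the test's string comparisons would need a second special case.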
************************************ 00:17:03.203 START TEST spdkcli_raid 00:17:03.203 ************************************ 00:17:03.203 23:01:42 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:03.463 * Looking for test storage... 00:17:03.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.463 23:01:42 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.463 --rc genhtml_legend=1 00:17:03.463 --rc geninfo_all_blocks=1 00:17:03.463 --rc geninfo_unexecuted_blocks=1 00:17:03.463 00:17:03.463 ' 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.463 --rc genhtml_legend=1 00:17:03.463 --rc geninfo_all_blocks=1 00:17:03.463 --rc geninfo_unexecuted_blocks=1 00:17:03.463 00:17:03.463 ' 00:17:03.463 
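The `cmp_versions` trace above (from scripts/common.sh, driving `lt 1.15 2` for the lcov check) splits each version string on `.-:` into an array with `read -ra`, then compares components numerically from the left until one side wins — which is how lcov 1.15 is judged older than 2. A minimal standalone sketch of that component-wise comparison, simplified from the logic visible in the trace (numeric components only; the real helper also handles `:`-separated suffixes):

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly less than version $2,
# comparing dot/dash-separated components numerically, left to right.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components count as 0, so "2" compares like "2.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2   && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Note that a plain string comparison would get this wrong (`"1.15" > "1.2"` lexically), which is why the helper compares one numeric component at a time.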
23:01:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.463 --rc genhtml_legend=1 00:17:03.463 --rc geninfo_all_blocks=1 00:17:03.463 --rc geninfo_unexecuted_blocks=1 00:17:03.463 00:17:03.463 ' 00:17:03.463 23:01:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.463 --rc genhtml_branch_coverage=1 00:17:03.463 --rc genhtml_function_coverage=1 00:17:03.464 --rc genhtml_legend=1 00:17:03.464 --rc geninfo_all_blocks=1 00:17:03.464 --rc geninfo_unexecuted_blocks=1 00:17:03.464 00:17:03.464 ' 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:03.464 23:01:42 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101618 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:03.464 23:01:42 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101618 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 101618 ']' 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.464 23:01:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.723 [2024-11-26 23:01:42.643138] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:17:03.723 [2024-11-26 23:01:42.643388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101618 ] 00:17:03.723 [2024-11-26 23:01:42.785406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.723 [2024-11-26 23:01:42.824096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:03.983 [2024-11-26 23:01:42.856175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.983 [2024-11-26 23:01:42.856337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:04.553 23:01:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.553 23:01:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.553 23:01:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.553 23:01:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:04.553 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:04.553 ' 00:17:05.957 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:05.958 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:06.220 23:01:45 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:06.221 23:01:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.221 23:01:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.221 23:01:45 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:06.221 23:01:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.221 23:01:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.221 23:01:45 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:06.221 ' 00:17:07.161 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:07.421 23:01:46 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:07.421 23:01:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.421 23:01:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.421 23:01:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:07.421 23:01:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.421 23:01:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.421 23:01:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:07.421 23:01:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:07.990 23:01:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:07.990 23:01:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:07.990 23:01:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:07.990 23:01:46 
spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.990 23:01:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.990 23:01:46 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:07.990 23:01:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.990 23:01:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.990 23:01:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:07.990 ' 00:17:08.927 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:08.927 23:01:48 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:08.927 23:01:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.927 23:01:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.187 23:01:48 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:09.187 23:01:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.187 23:01:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.187 23:01:48 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:09.187 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:09.187 ' 00:17:10.569 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:10.569 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:10.569 23:01:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.569 23:01:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101618 00:17:10.569 23:01:49 spdkcli_raid -- 
common/autotest_common.sh@954 -- # '[' -z 101618 ']' 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101618 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@959 -- # uname 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101618 00:17:10.569 killing process with pid 101618 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101618' 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 101618 00:17:10.569 23:01:49 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 101618 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101618 ']' 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101618 00:17:11.139 23:01:50 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101618 ']' 00:17:11.139 23:01:50 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101618 00:17:11.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101618) - No such process 00:17:11.139 Process with pid 101618 is not found 00:17:11.139 23:01:50 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 101618 is not found' 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:11.139 23:01:50 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:11.139 00:17:11.139 real 0m7.762s 00:17:11.139 user 0m16.302s 00:17:11.139 sys 0m1.167s 00:17:11.139 23:01:50 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.139 23:01:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.139 ************************************ 00:17:11.139 END TEST spdkcli_raid 00:17:11.139 ************************************ 00:17:11.139 23:01:50 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:11.139 23:01:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.139 23:01:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.139 23:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.139 ************************************ 00:17:11.139 START TEST blockdev_raid5f 00:17:11.139 ************************************ 00:17:11.139 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:11.139 * Looking for test storage... 
00:17:11.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:11.139 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:11.139 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:17:11.139 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:11.400 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:11.400 23:01:50 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.401 23:01:50 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.401 23:01:50 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.401 23:01:50 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:11.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.401 --rc genhtml_branch_coverage=1 00:17:11.401 --rc genhtml_function_coverage=1 00:17:11.401 --rc genhtml_legend=1 00:17:11.401 --rc geninfo_all_blocks=1 00:17:11.401 --rc geninfo_unexecuted_blocks=1 00:17:11.401 00:17:11.401 ' 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:11.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.401 --rc genhtml_branch_coverage=1 00:17:11.401 --rc genhtml_function_coverage=1 00:17:11.401 --rc genhtml_legend=1 00:17:11.401 --rc geninfo_all_blocks=1 00:17:11.401 --rc geninfo_unexecuted_blocks=1 
00:17:11.401 00:17:11.401 ' 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:11.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.401 --rc genhtml_branch_coverage=1 00:17:11.401 --rc genhtml_function_coverage=1 00:17:11.401 --rc genhtml_legend=1 00:17:11.401 --rc geninfo_all_blocks=1 00:17:11.401 --rc geninfo_unexecuted_blocks=1 00:17:11.401 00:17:11.401 ' 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:11.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.401 --rc genhtml_branch_coverage=1 00:17:11.401 --rc genhtml_function_coverage=1 00:17:11.401 --rc genhtml_legend=1 00:17:11.401 --rc geninfo_all_blocks=1 00:17:11.401 --rc geninfo_unexecuted_blocks=1 00:17:11.401 00:17:11.401 ' 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@709 -- 
# QOS_RUN_TIME=5 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=101870 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:11.401 23:01:50 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 101870 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 101870 ']' 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.401 23:01:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 [2024-11-26 23:01:50.455223] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:11.401 [2024-11-26 23:01:50.455863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101870 ] 00:17:11.661 [2024-11-26 23:01:50.590108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:11.661 [2024-11-26 23:01:50.624757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.662 [2024-11-26 23:01:50.651150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.232 Malloc0 00:17:12.232 Malloc1 00:17:12.232 Malloc2 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.232 23:01:51 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.232 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.232 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@786 
-- # jq -r .name 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "101161c7-ba0b-4ef6-a859-d6dc122b0d06"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "101161c7-ba0b-4ef6-a859-d6dc122b0d06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "101161c7-ba0b-4ef6-a859-d6dc122b0d06",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4e84d270-333f-4614-99b5-b06266814c13",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "043f4531-2599-43db-8431-d6f513770002",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3e01cbd3-1f25-49e6-9ebd-798ad244e7c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:17:12.493 23:01:51 blockdev_raid5f -- 
bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:12.493 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 101870 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 101870 ']' 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 101870 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101870 00:17:12.493 killing process with pid 101870 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101870' 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 101870 00:17:12.493 23:01:51 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 101870 00:17:13.063 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:13.063 23:01:51 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:13.063 23:01:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:13.063 23:01:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.064 23:01:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:13.064 ************************************ 00:17:13.064 START TEST bdev_hello_world 00:17:13.064 ************************************ 00:17:13.064 23:01:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:13.064 [2024-11-26 23:01:52.020018] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:13.064 [2024-11-26 23:01:52.020135] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101915 ] 00:17:13.064 [2024-11-26 23:01:52.157732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:13.323 [2024-11-26 23:01:52.197962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.323 [2024-11-26 23:01:52.225398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.323 [2024-11-26 23:01:52.404361] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:13.323 [2024-11-26 23:01:52.404411] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:13.323 [2024-11-26 23:01:52.404426] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:13.323 [2024-11-26 23:01:52.404735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:13.323 [2024-11-26 23:01:52.404865] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:13.323 [2024-11-26 23:01:52.404882] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:13.323 [2024-11-26 23:01:52.404924] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:13.323 00:17:13.323 [2024-11-26 23:01:52.404950] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:13.584 00:17:13.584 real 0m0.707s 00:17:13.584 user 0m0.373s 00:17:13.584 sys 0m0.228s 00:17:13.584 ************************************ 00:17:13.584 END TEST bdev_hello_world 00:17:13.584 ************************************ 00:17:13.584 23:01:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.584 23:01:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:13.584 23:01:52 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:13.584 23:01:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.584 23:01:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.584 23:01:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:13.584 ************************************ 00:17:13.584 START TEST bdev_bounds 00:17:13.584 ************************************ 00:17:13.584 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:13.584 23:01:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=101941 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 101941' 00:17:13.845 Process bdevio pid: 101941 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 101941 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 101941 ']' 00:17:13.845 23:01:52 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.845 23:01:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 [2024-11-26 23:01:52.797423] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:13.845 [2024-11-26 23:01:52.797613] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101941 ] 00:17:13.845 [2024-11-26 23:01:52.933125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:14.106 [2024-11-26 23:01:52.971130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.106 [2024-11-26 23:01:53.000690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.106 [2024-11-26 23:01:53.000730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.106 [2024-11-26 23:01:53.000789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.675 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.675 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:14.675 23:01:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:14.675 I/O targets: 00:17:14.675 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:14.675 00:17:14.675 00:17:14.675 CUnit - A unit testing framework for C - Version 2.1-3 00:17:14.675 http://cunit.sourceforge.net/ 00:17:14.675 00:17:14.675 00:17:14.675 Suite: bdevio tests on: raid5f 00:17:14.675 Test: blockdev write read block ...passed 00:17:14.675 Test: blockdev write zeroes read block ...passed 00:17:14.675 Test: blockdev write zeroes read no split ...passed 00:17:14.675 Test: blockdev write zeroes read split ...passed 00:17:14.935 Test: blockdev write zeroes read split partial ...passed 00:17:14.935 Test: blockdev reset ...passed 00:17:14.935 Test: blockdev write read 8 blocks ...passed 00:17:14.935 Test: blockdev write read size > 128k ...passed 00:17:14.935 Test: blockdev write read invalid size ...passed 00:17:14.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:14.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:14.935 Test: blockdev write read max offset ...passed 00:17:14.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:14.935 Test: blockdev writev readv 8 blocks ...passed 00:17:14.935 Test: 
blockdev writev readv 30 x 1block ...passed 00:17:14.935 Test: blockdev writev readv block ...passed 00:17:14.935 Test: blockdev writev readv size > 128k ...passed 00:17:14.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:14.935 Test: blockdev comparev and writev ...passed 00:17:14.935 Test: blockdev nvme passthru rw ...passed 00:17:14.935 Test: blockdev nvme passthru vendor specific ...passed 00:17:14.935 Test: blockdev nvme admin passthru ...passed 00:17:14.935 Test: blockdev copy ...passed 00:17:14.935 00:17:14.935 Run Summary: Type Total Ran Passed Failed Inactive 00:17:14.935 suites 1 1 n/a 0 0 00:17:14.935 tests 23 23 23 0 0 00:17:14.935 asserts 130 130 130 0 n/a 00:17:14.935 00:17:14.935 Elapsed time = 0.329 seconds 00:17:14.935 0 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 101941 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 101941 ']' 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 101941 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101941 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101941' 00:17:14.935 killing process with pid 101941 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 101941 00:17:14.935 23:01:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 101941 
00:17:15.195 23:01:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:15.195 00:17:15.195 real 0m1.433s 00:17:15.195 user 0m3.413s 00:17:15.195 sys 0m0.350s 00:17:15.195 23:01:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.195 23:01:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:15.195 ************************************ 00:17:15.195 END TEST bdev_bounds 00:17:15.195 ************************************ 00:17:15.195 23:01:54 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:15.195 23:01:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:15.195 23:01:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.195 23:01:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.195 ************************************ 00:17:15.195 START TEST bdev_nbd 00:17:15.195 ************************************ 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:15.195 23:01:54 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:15.195 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101989 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:15.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101989 /var/tmp/spdk-nbd.sock 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 101989 ']' 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.196 23:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:15.196 [2024-11-26 23:01:54.303323] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:15.196 [2024-11-26 23:01:54.303933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.456 [2024-11-26 23:01:54.440154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:15.456 [2024-11-26 23:01:54.476643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.456 [2024-11-26 23:01:54.503042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:16.027 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:16.293 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:16.293 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.294 1+0 records in 00:17:16.294 1+0 records out 00:17:16.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517482 s, 7.9 MB/s 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.294 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:16.294 23:01:55 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:16.558 { 00:17:16.558 "nbd_device": "/dev/nbd0", 00:17:16.558 "bdev_name": "raid5f" 00:17:16.558 } 00:17:16.558 ]' 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:16.558 { 00:17:16.558 "nbd_device": "/dev/nbd0", 00:17:16.558 "bdev_name": "raid5f" 00:17:16.558 } 00:17:16.558 ]' 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.558 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.819 23:01:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:17.079 23:01:56 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.079 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.080 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:17.339 /dev/nbd0 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:17.339 23:01:56 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.339 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.340 1+0 records in 00:17:17.340 1+0 records out 00:17:17.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346474 s, 11.8 MB/s 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.340 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:17:17.599 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:17.599 { 00:17:17.599 "nbd_device": "/dev/nbd0", 00:17:17.599 "bdev_name": "raid5f" 00:17:17.600 } 00:17:17.600 ]' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:17.600 { 00:17:17.600 "nbd_device": "/dev/nbd0", 00:17:17.600 "bdev_name": "raid5f" 00:17:17.600 } 00:17:17.600 ]' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:17:17.600 256+0 records in 00:17:17.600 256+0 records out 00:17:17.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490644 s, 214 MB/s 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:17.600 256+0 records in 00:17:17.600 256+0 records out 00:17:17.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284253 s, 36.9 MB/s 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.600 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.860 23:01:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:18.120 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:18.380 malloc_lvol_verify 00:17:18.380 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:18.639 978d15bd-5587-4a4f-aacc-5845b65f9282 00:17:18.639 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:18.899 224cc358-ba2c-4ef6-8eec-1767c4b9dc46 00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:18.899 /dev/nbd0 
00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:18.899 23:01:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:18.899 mke2fs 1.47.0 (5-Feb-2023) 00:17:18.899 Discarding device blocks: 0/4096 done 00:17:18.899 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:18.899 00:17:18.899 Allocating group tables: 0/1 done 00:17:18.899 Writing inode tables: 0/1 done 00:17:18.899 Creating journal (1024 blocks): done 00:17:18.899 Writing superblocks and filesystem accounting information: 0/1 done 00:17:18.899 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.899 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101989 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 101989 ']' 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 101989 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101989 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.160 killing process with pid 101989 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101989' 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 101989 00:17:19.160 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 101989 00:17:19.419 23:01:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:19.419 00:17:19.419 real 0m4.285s 00:17:19.419 user 0m6.180s 00:17:19.419 sys 0m1.285s 00:17:19.419 ************************************ 00:17:19.419 END TEST bdev_nbd 00:17:19.419 ************************************ 00:17:19.419 
23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.419 23:01:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:19.682 23:01:58 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:19.682 23:01:58 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:17:19.682 23:01:58 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:17:19.682 23:01:58 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:19.682 23:01:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.682 23:01:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.682 23:01:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:19.682 ************************************ 00:17:19.682 START TEST bdev_fio 00:17:19.682 ************************************ 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:19.682 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:19.682 ************************************ 00:17:19.682 START TEST bdev_fio_rw_verify 00:17:19.682 ************************************ 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:19.682 23:01:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:19.942 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:19.942 fio-3.35 00:17:19.942 Starting 1 thread 00:17:32.164 00:17:32.164 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102175: Tue Nov 26 23:02:09 2024 00:17:32.164 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec) 00:17:32.164 slat (nsec): min=17513, max=63745, avg=18948.61, stdev=1557.15 00:17:32.164 clat (usec): min=10, max=309, avg=127.19, stdev=44.78 00:17:32.164 lat (usec): min=29, max=339, avg=146.14, stdev=44.94 00:17:32.164 clat percentiles (usec): 00:17:32.164 | 50.000th=[ 133], 99.000th=[ 206], 99.900th=[ 227], 99.990th=[ 262], 00:17:32.164 | 99.999th=[ 302] 00:17:32.164 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(513MiB/9877msec); 0 zone resets 00:17:32.164 slat (usec): min=7, max=237, avg=15.82, stdev= 3.41 00:17:32.164 clat (usec): min=56, max=1745, avg=288.75, stdev=39.23 00:17:32.164 lat (usec): min=71, max=1983, avg=304.57, stdev=40.17 00:17:32.164 clat percentiles (usec): 00:17:32.164 | 50.000th=[ 293], 99.000th=[ 359], 99.900th=[ 570], 99.990th=[ 1029], 00:17:32.164 | 99.999th=[ 1647] 00:17:32.164 bw ( KiB/s): min=50192, max=54640, per=98.65%, avg=52502.32, stdev=1450.21, samples=19 00:17:32.164 iops : min=12548, max=13660, avg=13125.58, stdev=362.55, samples=19 00:17:32.164 lat (usec) : 20=0.01%, 50=0.01%, 100=16.74%, 250=40.03%, 500=43.17% 00:17:32.164 lat (usec) : 750=0.04%, 1000=0.02% 00:17:32.164 lat (msec) : 2=0.01% 00:17:32.164 cpu : usr=98.78%, sys=0.57%, ctx=36, majf=0, minf=13444 00:17:32.164 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.164 complete : 0=0.0%, 4=90.0%, 8=10.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.164 issued rwts: total=126475,131421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:32.164 00:17:32.164 Run status group 0 (all jobs): 00:17:32.164 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:17:32.164 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=513MiB (538MB), run=9877-9877msec 00:17:32.164 ----------------------------------------------------- 00:17:32.164 Suppressions used: 00:17:32.164 count bytes template 00:17:32.164 1 7 /usr/src/fio/parse.c 00:17:32.164 1004 96384 /usr/src/fio/iolog.c 00:17:32.164 1 8 libtcmalloc_minimal.so 00:17:32.164 1 904 libcrypto.so 00:17:32.164 ----------------------------------------------------- 00:17:32.164 00:17:32.164 00:17:32.164 real 0m11.270s 00:17:32.164 user 0m11.482s 00:17:32.164 sys 0m0.698s 00:17:32.164 ************************************ 00:17:32.164 END TEST bdev_fio_rw_verify 00:17:32.164 ************************************ 00:17:32.164 23:02:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.164 23:02:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1286 -- # local bdev_type= 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:32.164 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "101161c7-ba0b-4ef6-a859-d6dc122b0d06"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "101161c7-ba0b-4ef6-a859-d6dc122b0d06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "101161c7-ba0b-4ef6-a859-d6dc122b0d06",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4e84d270-333f-4614-99b5-b06266814c13",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "043f4531-2599-43db-8431-d6f513770002",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3e01cbd3-1f25-49e6-9ebd-798ad244e7c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.165 /home/vagrant/spdk_repo/spdk 00:17:32.165 ************************************ 00:17:32.165 END TEST bdev_fio 00:17:32.165 ************************************ 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:32.165 00:17:32.165 real 0m11.563s 00:17:32.165 user 0m11.606s 00:17:32.165 sys 0m0.838s 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.165 23:02:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set 
+x 00:17:32.165 23:02:10 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:32.165 23:02:10 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:32.165 23:02:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:32.165 23:02:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.165 23:02:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:32.165 ************************************ 00:17:32.165 START TEST bdev_verify 00:17:32.165 ************************************ 00:17:32.165 23:02:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:32.165 [2024-11-26 23:02:10.289331] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:32.165 [2024-11-26 23:02:10.289431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102335 ] 00:17:32.165 [2024-11-26 23:02:10.424311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:32.165 [2024-11-26 23:02:10.464155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.165 [2024-11-26 23:02:10.494644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.165 [2024-11-26 23:02:10.494704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.165 Running I/O for 5 seconds... 
00:17:33.674 16042.00 IOPS, 62.66 MiB/s [2024-11-26T23:02:13.744Z] 13497.50 IOPS, 52.72 MiB/s [2024-11-26T23:02:15.126Z] 12614.33 IOPS, 49.27 MiB/s [2024-11-26T23:02:16.100Z] 12174.75 IOPS, 47.56 MiB/s [2024-11-26T23:02:16.100Z] 11909.40 IOPS, 46.52 MiB/s 00:17:36.972 Latency(us) 00:17:36.972 [2024-11-26T23:02:16.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.972 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.972 Verification LBA range: start 0x0 length 0x2000 00:17:36.972 raid5f : 5.02 6916.61 27.02 0.00 0.00 27804.49 205.28 42498.72 00:17:36.972 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.972 Verification LBA range: start 0x2000 length 0x2000 00:17:36.972 raid5f : 5.02 4998.31 19.52 0.00 0.00 38434.48 253.48 35644.09 00:17:36.972 [2024-11-26T23:02:16.100Z] =================================================================================================================== 00:17:36.972 [2024-11-26T23:02:16.100Z] Total : 11914.93 46.54 0.00 0.00 32265.84 205.28 42498.72 00:17:36.972 00:17:36.972 real 0m5.886s 00:17:36.972 user 0m10.932s 00:17:36.972 sys 0m0.249s 00:17:37.245 ************************************ 00:17:37.246 END TEST bdev_verify 00:17:37.246 ************************************ 00:17:37.246 23:02:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.246 23:02:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:37.246 23:02:16 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:37.246 23:02:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:37.246 23:02:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.246 23:02:16 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.246 ************************************ 00:17:37.246 START TEST bdev_verify_big_io 00:17:37.246 ************************************ 00:17:37.246 23:02:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:37.246 [2024-11-26 23:02:16.266233] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:37.246 [2024-11-26 23:02:16.266386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102414 ] 00:17:37.505 [2024-11-26 23:02:16.408214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:37.505 [2024-11-26 23:02:16.448770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:37.505 [2024-11-26 23:02:16.502840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.505 [2024-11-26 23:02:16.502920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.768 Running I/O for 5 seconds... 
00:17:40.088 633.00 IOPS, 39.56 MiB/s [2024-11-26T23:02:20.154Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T23:02:21.093Z] 782.00 IOPS, 48.88 MiB/s [2024-11-26T23:02:22.047Z] 792.75 IOPS, 49.55 MiB/s [2024-11-26T23:02:22.047Z] 799.00 IOPS, 49.94 MiB/s 00:17:42.919 Latency(us) 00:17:42.919 [2024-11-26T23:02:22.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.919 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:42.919 Verification LBA range: start 0x0 length 0x200 00:17:42.919 raid5f : 5.23 461.16 28.82 0.00 0.00 6942739.72 182.97 307087.54 00:17:42.919 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:42.919 Verification LBA range: start 0x200 length 0x200 00:17:42.919 raid5f : 5.11 347.83 21.74 0.00 0.00 9094193.65 340.95 387515.23 00:17:42.919 [2024-11-26T23:02:22.047Z] =================================================================================================================== 00:17:42.919 [2024-11-26T23:02:22.047Z] Total : 808.98 50.56 0.00 0.00 7855913.53 182.97 387515.23 00:17:43.490 00:17:43.490 real 0m6.210s 00:17:43.490 user 0m11.410s 00:17:43.490 sys 0m0.364s 00:17:43.490 23:02:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.490 23:02:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.490 ************************************ 00:17:43.490 END TEST bdev_verify_big_io 00:17:43.490 ************************************ 00:17:43.490 23:02:22 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.490 23:02:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:43.490 23:02:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.490 23:02:22 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:43.490 ************************************ 00:17:43.490 START TEST bdev_write_zeroes 00:17:43.490 ************************************ 00:17:43.490 23:02:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.490 [2024-11-26 23:02:22.540247] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:43.490 [2024-11-26 23:02:22.540364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102501 ] 00:17:43.750 [2024-11-26 23:02:22.675631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:43.750 [2024-11-26 23:02:22.712217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.750 [2024-11-26 23:02:22.754894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.011 Running I/O for 1 seconds... 
00:17:44.952 29103.00 IOPS, 113.68 MiB/s 00:17:44.952 Latency(us) 00:17:44.952 [2024-11-26T23:02:24.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.952 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:44.952 raid5f : 1.01 29075.69 113.58 0.00 0.00 4389.11 1549.43 5940.68 00:17:44.952 [2024-11-26T23:02:24.080Z] =================================================================================================================== 00:17:44.952 [2024-11-26T23:02:24.080Z] Total : 29075.69 113.58 0.00 0.00 4389.11 1549.43 5940.68 00:17:45.523 ************************************ 00:17:45.523 END TEST bdev_write_zeroes 00:17:45.523 00:17:45.523 real 0m1.938s 00:17:45.523 user 0m1.513s 00:17:45.523 sys 0m0.311s 00:17:45.523 23:02:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.523 23:02:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:45.523 ************************************ 00:17:45.523 23:02:24 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:45.523 23:02:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:45.523 23:02:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.523 23:02:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:45.523 ************************************ 00:17:45.523 START TEST bdev_json_nonenclosed 00:17:45.523 ************************************ 00:17:45.523 23:02:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:45.523 [2024-11-26 
23:02:24.576081] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:45.523 [2024-11-26 23:02:24.576344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102539 ] 00:17:45.783 [2024-11-26 23:02:24.719138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:45.783 [2024-11-26 23:02:24.759144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.783 [2024-11-26 23:02:24.810036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.783 [2024-11-26 23:02:24.810185] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:45.783 [2024-11-26 23:02:24.810210] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:45.783 [2024-11-26 23:02:24.810223] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:46.043 00:17:46.044 real 0m0.454s 00:17:46.044 user 0m0.179s 00:17:46.044 sys 0m0.171s 00:17:46.044 23:02:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.044 23:02:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:46.044 ************************************ 00:17:46.044 END TEST bdev_json_nonenclosed 00:17:46.044 ************************************ 00:17:46.044 23:02:24 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.044 23:02:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:46.044 
23:02:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.044 23:02:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:46.044 ************************************ 00:17:46.044 START TEST bdev_json_nonarray 00:17:46.044 ************************************ 00:17:46.044 23:02:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.044 [2024-11-26 23:02:25.094702] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:46.044 [2024-11-26 23:02:25.094837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102570 ] 00:17:46.304 [2024-11-26 23:02:25.235069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:46.304 [2024-11-26 23:02:25.275360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.304 [2024-11-26 23:02:25.319733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.304 [2024-11-26 23:02:25.319887] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:46.304 [2024-11-26 23:02:25.319912] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:46.304 [2024-11-26 23:02:25.319933] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:46.565 00:17:46.565 real 0m0.431s 00:17:46.565 user 0m0.175s 00:17:46.565 sys 0m0.151s 00:17:46.565 23:02:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.566 ************************************ 00:17:46.566 END TEST bdev_json_nonarray 00:17:46.566 ************************************ 00:17:46.566 23:02:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:46.566 23:02:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:46.566 00:17:46.566 real 0m35.411s 00:17:46.566 user 0m47.685s 00:17:46.566 sys 0m5.028s 00:17:46.566 23:02:25 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.566 ************************************ 00:17:46.566 END TEST blockdev_raid5f 00:17:46.566 
************************************ 00:17:46.566 23:02:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 23:02:25 -- spdk/autotest.sh@194 -- # uname -s 00:17:46.566 23:02:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:46.566 23:02:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:46.566 23:02:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:46.566 23:02:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:46.566 23:02:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:46.566 23:02:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:46.566 23:02:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.566 23:02:25 -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 23:02:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:46.566 23:02:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:46.566 23:02:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:46.566 23:02:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:17:46.567 23:02:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:46.567 23:02:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:46.567 23:02:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:17:46.567 23:02:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:17:46.567 23:02:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:17:46.567 23:02:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:46.567 23:02:25 -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:46.567 23:02:25 -- common/autotest_common.sh@10 -- # set +x
00:17:46.567 23:02:25 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:46.567 23:02:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:46.567 23:02:25 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:46.567 23:02:25 -- common/autotest_common.sh@10 -- # set +x
00:17:49.124 INFO: APP EXITING
00:17:49.124 INFO: killing all VMs
00:17:49.124 INFO: killing vhost app
00:17:49.124 INFO: EXIT DONE
00:17:49.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:49.699 Waiting for block devices as requested
00:17:49.699 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:17:49.699 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:17:50.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:50.640 Cleaning
00:17:50.640 Removing: /var/run/dpdk/spdk0/config
00:17:50.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:17:50.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:17:50.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:17:50.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:17:50.640 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:17:50.903 Removing: /var/run/dpdk/spdk0/hugepage_info
00:17:50.903 Removing: /dev/shm/spdk_tgt_trace.pid70727
00:17:50.903 Removing: /var/run/dpdk/spdk0
00:17:50.903 Removing: /var/run/dpdk/spdk_pid100636
00:17:50.903 Removing: /var/run/dpdk/spdk_pid100955
00:17:50.903 Removing: /var/run/dpdk/spdk_pid101618
00:17:50.903 Removing: /var/run/dpdk/spdk_pid101870
00:17:50.903 Removing: /var/run/dpdk/spdk_pid101915
00:17:50.903 Removing: /var/run/dpdk/spdk_pid101941
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102164
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102335
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102414
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102501
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102539
00:17:50.903 Removing: /var/run/dpdk/spdk_pid102570
00:17:50.903 Removing: /var/run/dpdk/spdk_pid70547
00:17:50.903 Removing: /var/run/dpdk/spdk_pid70727
00:17:50.903 Removing: /var/run/dpdk/spdk_pid70934
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71016
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71044
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71156
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71174
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71362
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71430
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71515
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71615
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71701
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71735
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71766
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71842
00:17:50.903 Removing: /var/run/dpdk/spdk_pid71954
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72381
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72434
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72476
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72492
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72555
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72566
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72635
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72651
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72693
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72711
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72753
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72771
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72911
00:17:50.903 Removing: /var/run/dpdk/spdk_pid72942
00:17:50.903 Removing: /var/run/dpdk/spdk_pid73031
00:17:50.903 Removing: /var/run/dpdk/spdk_pid74204
00:17:50.903 Removing: /var/run/dpdk/spdk_pid74406
00:17:50.903 Removing: /var/run/dpdk/spdk_pid74535
00:17:50.903 Removing: /var/run/dpdk/spdk_pid75134
00:17:50.903 Removing: /var/run/dpdk/spdk_pid75335
00:17:50.903 Removing: /var/run/dpdk/spdk_pid75464
00:17:50.903 Removing: /var/run/dpdk/spdk_pid76074
00:17:51.164 Removing: /var/run/dpdk/spdk_pid76393
00:17:51.164 Removing: /var/run/dpdk/spdk_pid76522
00:17:51.164 Removing: /var/run/dpdk/spdk_pid77852
00:17:51.164 Removing: /var/run/dpdk/spdk_pid78094
00:17:51.164 Removing: /var/run/dpdk/spdk_pid78223
00:17:51.164 Removing: /var/run/dpdk/spdk_pid79554
00:17:51.164 Removing: /var/run/dpdk/spdk_pid79795
00:17:51.164 Removing: /var/run/dpdk/spdk_pid79924
00:17:51.164 Removing: /var/run/dpdk/spdk_pid81256
00:17:51.164 Removing: /var/run/dpdk/spdk_pid81686
00:17:51.164 Removing: /var/run/dpdk/spdk_pid81825
00:17:51.164 Removing: /var/run/dpdk/spdk_pid83255
00:17:51.164 Removing: /var/run/dpdk/spdk_pid83503
00:17:51.164 Removing: /var/run/dpdk/spdk_pid83639
00:17:51.164 Removing: /var/run/dpdk/spdk_pid85080
00:17:51.164 Removing: /var/run/dpdk/spdk_pid85329
00:17:51.164 Removing: /var/run/dpdk/spdk_pid85463
00:17:51.164 Removing: /var/run/dpdk/spdk_pid86899
00:17:51.164 Removing: /var/run/dpdk/spdk_pid87376
00:17:51.164 Removing: /var/run/dpdk/spdk_pid87511
00:17:51.164 Removing: /var/run/dpdk/spdk_pid87638
00:17:51.164 Removing: /var/run/dpdk/spdk_pid88045
00:17:51.164 Removing: /var/run/dpdk/spdk_pid88771
00:17:51.164 Removing: /var/run/dpdk/spdk_pid89157
00:17:51.164 Removing: /var/run/dpdk/spdk_pid89830
00:17:51.164 Removing: /var/run/dpdk/spdk_pid90255
00:17:51.164 Removing: /var/run/dpdk/spdk_pid90998
00:17:51.164 Removing: /var/run/dpdk/spdk_pid91385
00:17:51.164 Removing: /var/run/dpdk/spdk_pid93306
00:17:51.164 Removing: /var/run/dpdk/spdk_pid93729
00:17:51.164 Removing: /var/run/dpdk/spdk_pid94152
00:17:51.164 Removing: /var/run/dpdk/spdk_pid96185
00:17:51.164 Removing: /var/run/dpdk/spdk_pid96655
00:17:51.164 Removing: /var/run/dpdk/spdk_pid97140
00:17:51.164 Removing: /var/run/dpdk/spdk_pid98174
00:17:51.164 Removing: /var/run/dpdk/spdk_pid98485
00:17:51.164 Removing: /var/run/dpdk/spdk_pid99402
00:17:51.164 Removing: /var/run/dpdk/spdk_pid99716
00:17:51.164 Clean
00:17:51.164 23:02:30 -- common/autotest_common.sh@1453 -- # return 0
00:17:51.164 23:02:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:51.165 23:02:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:51.165 23:02:30 -- common/autotest_common.sh@10 -- # set +x
00:17:51.425 23:02:30 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:51.425 23:02:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:51.425 23:02:30 -- common/autotest_common.sh@10 -- # set +x
00:17:51.425 23:02:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:51.425 23:02:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:17:51.425 23:02:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:17:51.425 23:02:30 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:51.425 23:02:30 -- spdk/autotest.sh@398 -- # hostname
00:17:51.425 23:02:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:51.684 geninfo: WARNING: invalid characters removed from testname!
00:18:18.250 23:02:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:18.250 23:02:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:19.630 23:02:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:21.539 23:03:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:23.450 23:03:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:25.444 23:03:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:28.007 23:03:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:28.007 23:03:06 -- spdk/autorun.sh@1 -- $ timing_finish
00:18:28.007 23:03:06 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:18:28.007 23:03:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:28.007 23:03:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:28.007 23:03:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:28.007 + [[ -n 6168 ]]
00:18:28.007 + sudo kill 6168
00:18:28.018 [Pipeline] }
00:18:28.034 [Pipeline] // timeout
00:18:28.039 [Pipeline] }
00:18:28.054 [Pipeline] // stage
00:18:28.059 [Pipeline] }
00:18:28.074 [Pipeline] // catchError
00:18:28.084 [Pipeline] stage
00:18:28.086 [Pipeline] { (Stop VM)
00:18:28.099 [Pipeline] sh
00:18:28.399 + vagrant halt
00:18:30.950 ==> default: Halting domain...
00:18:39.095 [Pipeline] sh
00:18:39.378 + vagrant destroy -f
00:18:41.914 ==> default: Removing domain...
00:18:41.928 [Pipeline] sh
00:18:42.213 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:42.223 [Pipeline] }
00:18:42.235 [Pipeline] // stage
00:18:42.240 [Pipeline] }
00:18:42.254 [Pipeline] // dir
00:18:42.259 [Pipeline] }
00:18:42.272 [Pipeline] // wrap
00:18:42.278 [Pipeline] }
00:18:42.291 [Pipeline] // catchError
00:18:42.299 [Pipeline] stage
00:18:42.302 [Pipeline] { (Epilogue)
00:18:42.312 [Pipeline] sh
00:18:42.597 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:46.802 [Pipeline] catchError
00:18:46.803 [Pipeline] {
00:18:46.814 [Pipeline] sh
00:18:47.095 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:47.095 Artifacts sizes are good
00:18:47.105 [Pipeline] }
00:18:47.118 [Pipeline] // catchError
00:18:47.129 [Pipeline] archiveArtifacts
00:18:47.136 Archiving artifacts
00:18:47.254 [Pipeline] cleanWs
00:18:47.266 [WS-CLEANUP] Deleting project workspace...
00:18:47.266 [WS-CLEANUP] Deferred wipeout is used...
00:18:47.293 [WS-CLEANUP] done
00:18:47.295 [Pipeline] }
00:18:47.307 [Pipeline] // stage
00:18:47.312 [Pipeline] }
00:18:47.324 [Pipeline] // node
00:18:47.330 [Pipeline] End of Pipeline
00:18:47.369 Finished: SUCCESS